00:00:00.001 Started by upstream project "autotest-per-patch" build number 132495
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.079 The recommended git tool is: git
00:00:00.080 using credential 00000000-0000-0000-0000-000000000002
00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.144 Fetching changes from the remote Git repository
00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.213 Using shallow fetch with depth 1
00:00:00.213 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.213 > git --version # timeout=10
00:00:00.274 > git --version # 'git version 2.39.2'
00:00:00.274 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.317 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.317 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.331 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.343 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.355 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.355 > git config core.sparsecheckout # timeout=10
00:00:06.365 > git read-tree -mu HEAD # timeout=10
00:00:06.381 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.406 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.407 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.605 [Pipeline] Start of Pipeline
00:00:06.616 [Pipeline] library
00:00:06.617 Loading library shm_lib@master
00:00:06.617 Library shm_lib@master is cached. Copying from home.
00:00:06.636 [Pipeline] node
00:00:06.644 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.645 [Pipeline] {
00:00:06.653 [Pipeline] catchError
00:00:06.654 [Pipeline] {
00:00:06.663 [Pipeline] wrap
00:00:06.670 [Pipeline] {
00:00:06.674 [Pipeline] stage
00:00:06.676 [Pipeline] { (Prologue)
00:00:06.880 [Pipeline] sh
00:00:07.166 + logger -p user.info -t JENKINS-CI
00:00:07.185 [Pipeline] echo
00:00:07.186 Node: CYP12
00:00:07.193 [Pipeline] sh
00:00:07.496 [Pipeline] setCustomBuildProperty
00:00:07.511 [Pipeline] echo
00:00:07.512 Cleanup processes
00:00:07.518 [Pipeline] sh
00:00:07.802 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.802 285239 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.816 [Pipeline] sh
00:00:08.106 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.106 ++ grep -v 'sudo pgrep'
00:00:08.106 ++ awk '{print $1}'
00:00:08.106 + sudo kill -9
00:00:08.106 + true
00:00:08.126 [Pipeline] cleanWs
00:00:08.138 [WS-CLEANUP] Deleting project workspace...
00:00:08.138 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.146 [WS-CLEANUP] done
00:00:08.153 [Pipeline] setCustomBuildProperty
00:00:08.170 [Pipeline] sh
00:00:08.463 + sudo git config --global --replace-all safe.directory '*'
00:00:08.582 [Pipeline] httpRequest
00:00:09.365 [Pipeline] echo
00:00:09.367 Sorcerer 10.211.164.20 is alive
00:00:09.377 [Pipeline] retry
00:00:09.379 [Pipeline] {
00:00:09.393 [Pipeline] httpRequest
00:00:09.398 HttpMethod: GET
00:00:09.398 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.399 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.402 Response Code: HTTP/1.1 200 OK
00:00:09.402 Success: Status code 200 is in the accepted range: 200,404
00:00:09.402 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.795 [Pipeline] }
00:00:09.813 [Pipeline] // retry
00:00:09.820 [Pipeline] sh
00:00:10.109 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.127 [Pipeline] httpRequest
00:00:10.491 [Pipeline] echo
00:00:10.493 Sorcerer 10.211.164.20 is alive
00:00:10.503 [Pipeline] retry
00:00:10.506 [Pipeline] {
00:00:10.521 [Pipeline] httpRequest
00:00:10.525 HttpMethod: GET
00:00:10.526 URL: http://10.211.164.20/packages/spdk_e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9.tar.gz
00:00:10.526 Sending request to url: http://10.211.164.20/packages/spdk_e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9.tar.gz
00:00:10.529 Response Code: HTTP/1.1 404 Not Found
00:00:10.529 Success: Status code 404 is in the accepted range: 200,404
00:00:10.530 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9.tar.gz
00:00:10.534 [Pipeline] }
00:00:10.551 [Pipeline] // retry
00:00:10.560 [Pipeline] sh
00:00:10.853 + rm -f spdk_e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9.tar.gz
00:00:10.869 [Pipeline] retry
00:00:10.871 [Pipeline] {
00:00:10.893 [Pipeline] checkout
00:00:10.902 The recommended git tool is: NONE
00:00:10.914 using credential 00000000-0000-0000-0000-000000000002
00:00:10.917 Wiping out workspace first.
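A note on the pattern above before the clone begins: the job first asks the package server ("Sorcerer", 10.211.164.20) for a tarball keyed by the commit SHA, and deliberately treats 404 as an accepted status (hence "Success: Status code 404"). A hit means unpack and skip the expensive checkout; a miss, as here, means fall back to a full git checkout and, later in this log, upload the freshly built tarball. A minimal sketch of that idea, with a hypothetical build_from_git helper standing in for the pipeline's checkout steps:

    #!/usr/bin/env bash
    # Sketch of the commit-keyed package cache this pipeline uses.
    set -euo pipefail
    sha=e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9      # change under test
    pkg="spdk_${sha}.tar.gz"
    cache=http://10.211.164.20/packages              # Sorcerer package store

    # curl -f fails on HTTP errors, so a 404 (cache miss) takes the else branch.
    if curl -fsS -o "$pkg" "$cache/$pkg"; then
        tar --no-same-owner -xf "$pkg"               # cache hit: unpack and go
    else
        rm -f "$pkg"                                 # drop the error body, if any
        build_from_git "$sha"                        # cache miss: clone and rebuild
    fi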
00:00:10.952 Cloning the remote Git repository
00:00:10.956 Honoring refspec on initial clone
00:00:10.959 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:10.960 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk # timeout=10
00:00:10.968 Using reference repository: /var/ci_repos/spdk_multi
00:00:10.968 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:10.968 > git --version # timeout=10
00:00:10.973 > git --version # 'git version 2.45.2'
00:00:10.973 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:10.979 Setting http proxy: proxy-dmz.intel.com:911
00:00:10.979 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/65/25465/3 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:01:08.117 Avoid second fetch
00:01:08.136 Checking out Revision e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9 (FETCH_HEAD)
00:01:08.359 Commit message: "thread: move interrupt allocation to a function"
00:01:08.097 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:01:08.101 > git config --add remote.origin.fetch refs/changes/65/25465/3 # timeout=10
00:01:08.106 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:01:08.119 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:01:08.129 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:01:08.139 > git config core.sparsecheckout # timeout=10
00:01:08.144 > git checkout -f e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9 # timeout=10
00:01:08.361 > git rev-list --no-walk 1e9cebf1906bf9e4023a8547d868ff77a95aae6d # timeout=10
00:01:08.389 > git remote # timeout=10
00:01:08.394 > git submodule init # timeout=10
00:01:08.478 > git submodule sync # timeout=10
00:01:08.559 > git config --get remote.origin.url # timeout=10
00:01:08.569 > git submodule init # timeout=10
00:01:08.646 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:01:08.652 > git config --get submodule.dpdk.url # timeout=10
00:01:08.658 > git remote # timeout=10
00:01:08.663 > git config --get remote.origin.url # timeout=10
00:01:08.668 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:01:08.674 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:01:08.679 > git remote # timeout=10
00:01:08.685 > git config --get remote.origin.url # timeout=10
00:01:08.690 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:01:08.695 > git config --get submodule.isa-l.url # timeout=10
00:01:08.700 > git remote # timeout=10
00:01:08.705 > git config --get remote.origin.url # timeout=10
00:01:08.710 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:01:08.716 > git config --get submodule.ocf.url # timeout=10
00:01:08.721 > git remote # timeout=10
00:01:08.726 > git config --get remote.origin.url # timeout=10
00:01:08.732 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:01:08.737 > git config --get submodule.libvfio-user.url # timeout=10
00:01:08.742 > git remote # timeout=10
00:01:08.747 > git config --get remote.origin.url # timeout=10
00:01:08.753 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:01:08.763 > git config --get submodule.xnvme.url # timeout=10
00:01:08.768 > git remote # timeout=10
00:01:08.774 > git config --get remote.origin.url # timeout=10
00:01:08.779 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:01:08.785 > git config --get submodule.isa-l-crypto.url # timeout=10
00:01:08.790 > git remote # timeout=10
00:01:08.795 > git config --get remote.origin.url # timeout=10
00:01:08.800 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:01:08.806 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:08.806 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:08.806 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:08.806 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:08.807 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:08.807 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:08.807 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:08.811 Setting http proxy: proxy-dmz.intel.com:911
00:01:08.811 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:01:08.812 Setting http proxy: proxy-dmz.intel.com:911
00:01:08.812 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:01:08.812 Setting http proxy: proxy-dmz.intel.com:911
00:01:08.812 Setting http proxy: proxy-dmz.intel.com:911
00:01:08.812 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:01:08.812 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:01:08.813 Setting http proxy: proxy-dmz.intel.com:911
00:01:08.813 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:01:08.813 Setting http proxy: proxy-dmz.intel.com:911
00:01:08.813 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:01:08.813 Setting http proxy: proxy-dmz.intel.com:911
00:01:08.813 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:01:18.575 [Pipeline] dir
00:01:18.576 Running in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:18.577 [Pipeline] {
00:01:18.594 [Pipeline] sh
00:01:18.889 ++ nproc
00:01:18.889 + threads=144
00:01:18.889 + git repack -a -d --threads=144
00:01:25.476 + git submodule foreach git repack -a -d --threads=144
00:01:25.476 Entering 'dpdk'
00:01:33.616 Entering 'intel-ipsec-mb'
00:01:33.616 Entering 'isa-l'
00:01:33.616 Entering 'isa-l-crypto'
00:01:33.616 Entering 'libvfio-user'
00:01:33.616 Entering 'ocf'
00:01:33.616 Entering 'xnvme'
00:01:34.190 + find .git -type f -name alternates -print -delete
00:01:34.190 .git/objects/info/alternates
00:01:34.190 .git/modules/dpdk/objects/info/alternates
00:01:34.190 .git/modules/libvfio-user/objects/info/alternates
00:01:34.190 .git/modules/xnvme/objects/info/alternates
00:01:34.190 .git/modules/intel-ipsec-mb/objects/info/alternates
00:01:34.190 .git/modules/ocf/objects/info/alternates
00:01:34.190 .git/modules/isa-l/objects/info/alternates
00:01:34.190 .git/modules/isa-l-crypto/objects/info/alternates
00:01:34.201 [Pipeline] }
00:01:34.220 [Pipeline] // dir
00:01:34.226 [Pipeline] }
00:01:34.244 [Pipeline] // retry
00:01:34.254 [Pipeline] sh
00:01:34.544 + hash pigz
00:01:34.544 + tar -cf spdk_e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9.tar.gz -I pigz spdk
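The tarball just created deserves a comment. The clone above borrowed objects from the local mirror /var/ci_repos/spdk_multi via .git/objects/info/alternates, which makes the fetch fast but would leave the archive broken on any machine without that mirror. Repacking with every core copies the borrowed objects into the repository proper, after which the alternates files can be deleted and the tree compressed with pigz. Condensed into a standalone sketch (the Jenkins plugin does the clone via git init plus a fetch of a Gerrit change ref; a plain clone --reference, shown here, is the same idea, and sha is as in the earlier sketch):

    # Borrow objects from a local mirror instead of refetching all of SPDK.
    git clone --reference /var/ci_repos/spdk_multi \
        https://review.spdk.io/gerrit/a/spdk/spdk spdk
    cd spdk
    git submodule update --init --recursive --reference /var/ci_repos/spdk_multi

    # Make the clone self-contained: repack copies the borrowed objects in,
    # so the alternates indirection can then be removed safely.
    git repack -a -d --threads="$(nproc)"
    git submodule foreach git repack -a -d --threads="$(nproc)"
    find .git -type f -name alternates -print -delete

    # Compress with parallel gzip for the cache upload that follows.
    cd .. && tar -cf "spdk_${sha}.tar.gz" -I pigz spdk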
00:01:35.505 [Pipeline] retry
00:01:35.508 [Pipeline] {
00:01:35.523 [Pipeline] httpRequest
00:01:35.531 HttpMethod: PUT
00:01:35.531 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9.tar.gz
00:01:35.534 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9.tar.gz
00:01:42.277 Response Code: HTTP/1.1 200 OK
00:01:42.286 Success: Status code 200 is in the accepted range: 200
00:01:42.290 [Pipeline] }
00:01:42.308 [Pipeline] // retry
00:01:42.315 [Pipeline] echo
00:01:42.317 
00:01:42.317 Locking
00:01:42.317 Waited 4s for lock
00:01:42.317 File already exists: /storage/packages/spdk_e4a86cc92d2ca9848da5ed47ac55d62cf93b6dd9.tar.gz
00:01:42.317 
00:01:42.321 [Pipeline] sh
00:01:42.610 + git -C spdk log --oneline -n5
00:01:42.610 e4a86cc92 thread: move interrupt allocation to a function
00:01:42.610 e0fe8d229 util: add method for setting fd_group's wrapper
00:01:42.610 1e9cebf19 util: multi-level fd_group nesting
00:01:42.610 09301ca15 util: keep track of nested child fd_groups
00:01:42.610 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:01:42.624 [Pipeline] }
00:01:42.640 [Pipeline] // stage
00:01:42.650 [Pipeline] stage
00:01:42.653 [Pipeline] { (Prepare)
00:01:42.674 [Pipeline] writeFile
00:01:42.692 [Pipeline] sh
00:01:42.983 + logger -p user.info -t JENKINS-CI
00:01:42.999 [Pipeline] sh
00:01:43.289 + logger -p user.info -t JENKINS-CI
00:01:43.304 [Pipeline] sh
00:01:43.595 + cat autorun-spdk.conf
00:01:43.595 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.595 SPDK_TEST_NVMF=1
00:01:43.595 SPDK_TEST_NVME_CLI=1
00:01:43.595 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.595 SPDK_TEST_NVMF_NICS=e810
00:01:43.595 SPDK_TEST_VFIOUSER=1
00:01:43.595 SPDK_RUN_UBSAN=1
00:01:43.595 NET_TYPE=phy
00:01:43.603 RUN_NIGHTLY=0
00:01:43.609 [Pipeline] readFile
00:01:43.640 [Pipeline] withEnv
00:01:43.643 [Pipeline] {
00:01:43.657 [Pipeline] sh
00:01:43.951 + set -ex
00:01:43.951 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:43.951 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:43.951 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.951 ++ SPDK_TEST_NVMF=1
00:01:43.951 ++ SPDK_TEST_NVME_CLI=1
00:01:43.951 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:43.951 ++ SPDK_TEST_NVMF_NICS=e810
00:01:43.951 ++ SPDK_TEST_VFIOUSER=1
00:01:43.951 ++ SPDK_RUN_UBSAN=1
00:01:43.951 ++ NET_TYPE=phy
00:01:43.951 ++ RUN_NIGHTLY=0
00:01:43.951 + case $SPDK_TEST_NVMF_NICS in
00:01:43.951 + DRIVERS=ice
00:01:43.951 + [[ tcp == \r\d\m\a ]]
00:01:43.951 + [[ -n ice ]]
00:01:43.951 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:43.951 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:43.951 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:43.951 rmmod: ERROR: Module irdma is not currently loaded
00:01:43.951 rmmod: ERROR: Module i40iw is not currently loaded
00:01:43.951 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:43.951 + true
00:01:43.951 + for D in $DRIVERS
00:01:43.951 + sudo modprobe ice
00:01:43.951 + exit 0
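That exit 0 closes the NIC preparation step: with SPDK_TEST_NVMF_NICS=e810 the script settles on the Intel ice driver, unloads any RDMA-capable drivers a previous job may have left behind (the rmmod errors are expected and swallowed by the trailing true), and loads the driver the TCP-transport tests need. The visible logic, restated as a sketch:

    # Mirrors the xtrace above; the driver set depends on the NIC under test.
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;              # Intel E810 -> ice
    esac

    # Stale RDMA drivers would interfere with a clean tcp-transport run;
    # tolerate modules that are simply not loaded.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true

    for D in $DRIVERS; do
        sudo modprobe "$D"
    done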
00:01:43.962 [Pipeline] }
00:01:43.978 [Pipeline] // withEnv
00:01:43.984 [Pipeline] }
00:01:44.001 [Pipeline] // stage
00:01:44.012 [Pipeline] catchError
00:01:44.014 [Pipeline] {
00:01:44.032 [Pipeline] timeout
00:01:44.032 Timeout set to expire in 1 hr 0 min
00:01:44.035 [Pipeline] {
00:01:44.050 [Pipeline] stage
00:01:44.053 [Pipeline] { (Tests)
00:01:44.072 [Pipeline] sh
00:01:44.368 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.368 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.368 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.368 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:44.368 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:44.368 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:44.368 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:44.368 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:44.368 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:44.368 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:44.368 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:44.368 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:44.368 + source /etc/os-release
00:01:44.368 ++ NAME='Fedora Linux'
00:01:44.368 ++ VERSION='39 (Cloud Edition)'
00:01:44.368 ++ ID=fedora
00:01:44.368 ++ VERSION_ID=39
00:01:44.368 ++ VERSION_CODENAME=
00:01:44.368 ++ PLATFORM_ID=platform:f39
00:01:44.368 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:44.368 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:44.368 ++ LOGO=fedora-logo-icon
00:01:44.368 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:44.368 ++ HOME_URL=https://fedoraproject.org/
00:01:44.368 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:44.368 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:44.368 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:44.368 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:44.368 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:44.368 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:44.368 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:44.368 ++ SUPPORT_END=2024-11-12
00:01:44.368 ++ VARIANT='Cloud Edition'
00:01:44.368 ++ VARIANT_ID=cloud
00:01:44.368 + uname -a
00:01:44.368 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:44.368 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:47.671 Hugepages
00:01:47.671 node hugesize free / total
00:01:47.671 node0 1048576kB 0 / 0
00:01:47.671 node0 2048kB 0 / 0
00:01:47.671 node1 1048576kB 0 / 0
00:01:47.671 node1 2048kB 0 / 0
00:01:47.671 
00:01:47.671 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:47.671 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:47.671 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:47.671 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:47.671 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:47.671 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:47.671 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:47.671 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:47.671 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:47.671 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:47.671 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:47.671 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:47.671 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:47.671 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:47.671 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:47.671 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:47.671 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:47.671 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:47.671 + rm -f /tmp/spdk-ld-path
00:01:47.671 + source autorun-spdk.conf
00:01:47.671 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.671 ++ SPDK_TEST_NVMF=1
00:01:47.671 ++ SPDK_TEST_NVME_CLI=1
00:01:47.671 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:47.671 ++ SPDK_TEST_NVMF_NICS=e810
00:01:47.671 ++ SPDK_TEST_VFIOUSER=1
00:01:47.671 ++ SPDK_RUN_UBSAN=1
00:01:47.671 ++ NET_TYPE=phy
00:01:47.671 ++ RUN_NIGHTLY=0
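The setup.sh status dump above is a pre-test snapshot: both NUMA nodes show empty 2 MiB and 1 GiB hugepage pools, and the device table lists sixteen I/OAT DMA channels plus one Samsung NVMe disk (vendor 144d), all still bound to their kernel drivers. For the hugepage half, the script is essentially reading sysfs; an equivalent standalone loop (a sketch, not SPDK's actual implementation) would be:

    # Per-NUMA-node hugepage pools, the same data setup.sh status prints.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}          # e.g. 2048kB or 1048576kB
            echo "${node##*/} $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
        done
    done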
00:01:47.671 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:47.671 + [[ -n '' ]]
00:01:47.671 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:47.671 + for M in /var/spdk/build-*-manifest.txt
00:01:47.671 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:47.671 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:47.671 + for M in /var/spdk/build-*-manifest.txt
00:01:47.671 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:47.671 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:47.671 + for M in /var/spdk/build-*-manifest.txt
00:01:47.671 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:47.671 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:47.671 ++ uname
00:01:47.671 + [[ Linux == \L\i\n\u\x ]]
00:01:47.671 + sudo dmesg -T
00:01:47.671 + sudo dmesg --clear
00:01:47.671 + dmesg_pid=288688
00:01:47.671 + [[ Fedora Linux == FreeBSD ]]
00:01:47.671 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:47.671 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:47.671 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:47.671 + [[ -x /usr/src/fio-static/fio ]]
00:01:47.671 + export FIO_BIN=/usr/src/fio-static/fio
00:01:47.671 + FIO_BIN=/usr/src/fio-static/fio
00:01:47.671 + sudo dmesg -Tw
00:01:47.671 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:47.671 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:47.671 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:47.671 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:47.671 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:47.671 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:47.671 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:47.671 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:47.671 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.671 12:36:27 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:47.671 12:36:27 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
12:36:27 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
12:36:27 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
12:36:27 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.934 12:36:27 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
12:36:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
12:36:27 -- scripts/common.sh@15 -- $ shopt -s extglob
12:36:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
12:36:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:36:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
12:36:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:36:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:36:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:36:27 -- paths/export.sh@5 -- $ export PATH
12:36:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
12:36:27 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
12:36:27 -- common/autobuild_common.sh@493 -- $ date +%s
12:36:27 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732534587.XXXXXX
12:36:27 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732534587.qU36NX
12:36:27 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
12:36:27 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
12:36:27 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
12:36:27 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
12:36:27 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
12:36:27 -- common/autobuild_common.sh@509 -- $ get_config_params
12:36:27 -- common/autotest_common.sh@409 -- $ xtrace_disable
12:36:27 -- common/autotest_common.sh@10 -- $ set +x
00:01:47.934 12:36:27 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
12:36:27 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
12:36:27 -- pm/common@17 -- $ local monitor
12:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:36:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
12:36:27 -- pm/common@21 -- $ date +%s
12:36:27 -- pm/common@25 -- $ sleep 1
12:36:27 -- pm/common@21 -- $ date +%s
12:36:27 -- pm/common@21 -- $ date +%s
12:36:27 -- pm/common@21 -- $ date +%s
12:36:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732534587
12:36:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732534587
12:36:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732534587
12:36:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732534587
00:01:47.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732534587_collect-vmstat.pm.log
00:01:47.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732534587_collect-cpu-load.pm.log
00:01:47.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732534587_collect-cpu-temp.pm.log
00:01:47.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732534587_collect-bmc-pm.bmc.pm.log
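Those four Redirecting lines are background samplers (CPU load, vmstat, CPU temperature, BMC power) that keep writing .pm.log files under output/power for the whole build; the trap registered on the next line tears them down on any exit. Reduced to its essentials, the pattern looks like this (collector names as in the log; the real pm/common helper also manages PID files):

    # Launch detached samplers and reap them however the script exits.
    ts=$(date +%s)
    pids=()
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        scripts/perf/pm/$mon -d ../output/power -l -p "monitor.autobuild.sh.$ts" &
        pids+=($!)
    done

    stop_monitor_resources() {        # stand-in for SPDK's cleanup function
        kill "${pids[@]}" 2>/dev/null || true
    }
    trap stop_monitor_resources EXIT  # fires on success, failure, or signal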
00:01:48.878 12:36:28 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
12:36:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
12:36:28 -- spdk/autobuild.sh@12 -- $ umask 022
12:36:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
12:36:28 -- spdk/autobuild.sh@16 -- $ date -u
00:01:48.878 Mon Nov 25 11:36:28 AM UTC 2024
00:01:48.878 12:36:28 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:48.878 v25.01-pre-223-ge4a86cc92
00:01:48.878 12:36:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:48.878 12:36:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:48.878 12:36:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
12:36:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:36:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:36:28 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.878 ************************************
00:01:48.878 START TEST ubsan
00:01:48.878 ************************************
00:01:48.878 12:36:28 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:48.878 using ubsan
00:01:48.878 
00:01:48.878 real 0m0.001s
00:01:48.878 user 0m0.001s
00:01:48.878 sys 0m0.000s
00:01:48.878 12:36:28 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:48.878 12:36:28 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:48.878 ************************************
00:01:48.878 END TEST ubsan
00:01:48.878 ************************************
00:01:48.878 12:36:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
12:36:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
12:36:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
12:36:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
12:36:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
12:36:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
12:36:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
12:36:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
12:36:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:49.140 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:49.140 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:49.400 Using 'verbs' RDMA provider
00:02:02.600 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:17.506 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:17.506 Creating mk/config.mk...done.
00:02:17.506 Creating mk/cc.flags.mk...done.
00:02:17.506 Type 'make' to build.
00:02:17.506 12:36:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
12:36:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
12:36:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
12:36:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:17.506 ************************************
00:02:17.506 START TEST make
00:02:17.506 ************************************
00:02:17.506 12:36:56 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:17.815 make[1]: Nothing to be done for 'all'.
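run_test, used above for both the trivial ubsan probe and make -j144, is SPDK's timing harness: it prints the START/END banners, times the wrapped command, and passes its exit status through. A stripped-down equivalent that reproduces only the visible behavior (the real helper in autotest_common.sh also records timing data for reports):

    run_test() {                      # usage: run_test <name> <command...>
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        local start=$SECONDS rc=0
        "$@" || rc=$?                 # run the command, keep its status
        echo "real    $((SECONDS - start))s"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test make make -j144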
00:02:18.892 The Meson build system
00:02:18.892 Version: 1.5.0
00:02:18.892 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:18.892 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:18.892 Build type: native build
00:02:18.892 Project name: libvfio-user
00:02:18.892 Project version: 0.0.1
00:02:18.892 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:18.892 C linker for the host machine: cc ld.bfd 2.40-14
00:02:18.892 Host machine cpu family: x86_64
00:02:18.892 Host machine cpu: x86_64
00:02:18.892 Run-time dependency threads found: YES
00:02:18.892 Library dl found: YES
00:02:18.892 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:18.892 Run-time dependency json-c found: YES 0.17
00:02:18.892 Run-time dependency cmocka found: YES 1.1.7
00:02:18.892 Program pytest-3 found: NO
00:02:18.892 Program flake8 found: NO
00:02:18.892 Program misspell-fixer found: NO
00:02:18.892 Program restructuredtext-lint found: NO
00:02:18.892 Program valgrind found: YES (/usr/bin/valgrind)
00:02:18.892 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:18.892 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:18.892 Compiler for C supports arguments -Wwrite-strings: YES
00:02:18.892 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:18.892 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:18.892 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:18.892 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:18.892 Build targets in project: 8
00:02:18.892 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:18.892 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:18.892 
00:02:18.892 libvfio-user 0.0.1
00:02:18.892 
00:02:18.892 User defined options
00:02:18.892 buildtype : debug
00:02:18.892 default_library: shared
00:02:18.892 libdir : /usr/local/lib
00:02:18.892 
00:02:18.892 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:18.892 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:19.153 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:19.153 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:19.153 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:19.153 [4/37] Compiling C object samples/null.p/null.c.o
00:02:19.153 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:19.153 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:19.153 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:19.153 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:19.153 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:19.153 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:19.153 [11/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:19.153 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:19.153 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:19.153 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:19.153 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:19.153 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:19.153 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:19.153 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:19.153 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:19.153 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:19.153 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:19.153 [22/37] Compiling C object samples/server.p/server.c.o
00:02:19.153 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:19.153 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:19.153 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:19.153 [26/37] Compiling C object samples/client.p/client.c.o
00:02:19.153 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:19.153 [28/37] Linking target samples/client
00:02:19.413 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:19.413 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:19.413 [31/37] Linking target test/unit_tests
00:02:19.413 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:19.413 [33/37] Linking target samples/shadow_ioeventfd_server
00:02:19.413 [34/37] Linking target samples/null
00:02:19.413 [35/37] Linking target samples/server
00:02:19.413 [36/37] Linking target samples/lspci
00:02:19.413 [37/37] Linking target samples/gpio-pci-idio-16
00:02:19.413 INFO: autodetecting backend as ninja
00:02:19.413 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
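That closes out libvfio-user: 37 compile/link jobs in an out-of-tree build directory produce the shared library, the sample servers and clients, and the unit-test binary, and the next lines stage an install with DESTDIR so nothing touches the real /usr/local. The three-step meson workflow being driven here is, in outline (paths shortened for readability; a sketch, not SPDK's build script):

    src=spdk/libvfio-user
    build=spdk/build/libvfio-user/build-debug

    # 1. Configure a debug build of a shared library into a separate build dir.
    meson setup "$build" "$src" --buildtype=debug -Ddefault_library=shared

    # 2. Compile: ninja runs the 37 steps listed above.
    ninja -C "$build"

    # 3. Stage the artifacts under a private prefix for the SPDK build to use.
    DESTDIR=$PWD/spdk/build/libvfio-user meson install --quiet -C "$build"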
00:02:19.674 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:19.935 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:19.935 ninja: no work to do.
00:02:26.514 The Meson build system
00:02:26.514 Version: 1.5.0
00:02:26.514 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:26.514 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:26.514 Build type: native build
00:02:26.514 Program cat found: YES (/usr/bin/cat)
00:02:26.514 Project name: DPDK
00:02:26.514 Project version: 24.03.0
00:02:26.514 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:26.514 C linker for the host machine: cc ld.bfd 2.40-14
00:02:26.514 Host machine cpu family: x86_64
00:02:26.514 Host machine cpu: x86_64
00:02:26.514 Message: ## Building in Developer Mode ##
00:02:26.514 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:26.514 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:26.514 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:26.514 Program python3 found: YES (/usr/bin/python3)
00:02:26.514 Program cat found: YES (/usr/bin/cat)
00:02:26.514 Compiler for C supports arguments -march=native: YES
00:02:26.514 Checking for size of "void *" : 8
00:02:26.514 Checking for size of "void *" : 8 (cached)
00:02:26.514 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:26.514 Library m found: YES
00:02:26.514 Library numa found: YES
00:02:26.514 Has header "numaif.h" : YES
00:02:26.514 Library fdt found: NO
00:02:26.514 Library execinfo found: NO
00:02:26.514 Has header "execinfo.h" : YES
00:02:26.514 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:26.514 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:26.514 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:26.514 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:26.514 Run-time dependency openssl found: YES 3.1.1
00:02:26.514 Run-time dependency libpcap found: YES 1.10.4
00:02:26.514 Has header "pcap.h" with dependency libpcap: YES
00:02:26.514 Compiler for C supports arguments -Wcast-qual: YES
00:02:26.514 Compiler for C supports arguments -Wdeprecated: YES
00:02:26.514 Compiler for C supports arguments -Wformat: YES
00:02:26.514 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:26.514 Compiler for C supports arguments -Wformat-security: NO
00:02:26.514 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:26.514 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:26.514 Compiler for C supports arguments -Wnested-externs: YES
00:02:26.514 Compiler for C supports arguments -Wold-style-definition: YES
00:02:26.514 Compiler for C supports arguments -Wpointer-arith: YES
00:02:26.514 Compiler for C supports arguments -Wsign-compare: YES
00:02:26.514 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:26.514 Compiler for C supports arguments -Wundef: YES
00:02:26.514 Compiler for C supports arguments -Wwrite-strings: YES
00:02:26.514 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:26.514 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:26.514 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:26.514 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:26.514 Program objdump found: YES (/usr/bin/objdump)
00:02:26.514 Compiler for C supports arguments -mavx512f: YES
00:02:26.514 Checking if "AVX512 checking" compiles: YES
00:02:26.514 Fetching value of define "__SSE4_2__" : 1
00:02:26.514 Fetching value of define "__AES__" : 1
00:02:26.514 Fetching value of define "__AVX__" : 1
00:02:26.514 Fetching value of define "__AVX2__" : 1
00:02:26.514 Fetching value of define "__AVX512BW__" : 1
00:02:26.514 Fetching value of define "__AVX512CD__" : 1
00:02:26.514 Fetching value of define "__AVX512DQ__" : 1
00:02:26.514 Fetching value of define "__AVX512F__" : 1
00:02:26.514 Fetching value of define "__AVX512VL__" : 1
00:02:26.514 Fetching value of define "__PCLMUL__" : 1
00:02:26.514 Fetching value of define "__RDRND__" : 1
00:02:26.514 Fetching value of define "__RDSEED__" : 1
00:02:26.514 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:26.514 Fetching value of define "__znver1__" : (undefined)
00:02:26.514 Fetching value of define "__znver2__" : (undefined)
00:02:26.514 Fetching value of define "__znver3__" : (undefined)
00:02:26.514 Fetching value of define "__znver4__" : (undefined)
00:02:26.514 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:26.514 Message: lib/log: Defining dependency "log"
00:02:26.514 Message: lib/kvargs: Defining dependency "kvargs"
00:02:26.514 Message: lib/telemetry: Defining dependency "telemetry"
00:02:26.514 Checking for function "getentropy" : NO
00:02:26.514 Message: lib/eal: Defining dependency "eal"
00:02:26.514 Message: lib/ring: Defining dependency "ring"
00:02:26.514 Message: lib/rcu: Defining dependency "rcu"
00:02:26.514 Message: lib/mempool: Defining dependency "mempool"
00:02:26.514 Message: lib/mbuf: Defining dependency "mbuf"
00:02:26.514 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:26.514 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:26.514 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:26.514 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:26.514 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:26.514 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:26.514 Compiler for C supports arguments -mpclmul: YES
00:02:26.514 Compiler for C supports arguments -maes: YES
00:02:26.514 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:26.514 Compiler for C supports arguments -mavx512bw: YES
00:02:26.514 Compiler for C supports arguments -mavx512dq: YES
00:02:26.514 Compiler for C supports arguments -mavx512vl: YES
00:02:26.514 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:26.514 Compiler for C supports arguments -mavx2: YES
00:02:26.514 Compiler for C supports arguments -mavx: YES
00:02:26.514 Message: lib/net: Defining dependency "net"
00:02:26.514 Message: lib/meter: Defining dependency "meter"
00:02:26.514 Message: lib/ethdev: Defining dependency "ethdev"
00:02:26.514 Message: lib/pci: Defining dependency "pci"
00:02:26.514 Message: lib/cmdline: Defining dependency "cmdline"
00:02:26.514 Message: lib/hash: Defining dependency "hash"
00:02:26.514 Message: lib/timer: Defining dependency "timer"
00:02:26.514 Message: lib/compressdev: Defining dependency "compressdev"
00:02:26.514 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:26.514 Message: lib/dmadev: Defining dependency "dmadev"
00:02:26.514 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:26.514 Message: lib/power: Defining dependency "power"
00:02:26.514 Message: lib/reorder: Defining dependency "reorder"
00:02:26.514 Message: lib/security: Defining dependency "security"
00:02:26.514 Has header "linux/userfaultfd.h" : YES
00:02:26.514 Has header "linux/vduse.h" : YES
00:02:26.514 Message: lib/vhost: Defining dependency "vhost"
00:02:26.514 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:26.514 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:26.514 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:26.514 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:26.514 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:26.514 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:26.514 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:26.514 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:26.514 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:26.514 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:26.514 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:26.514 Configuring doxy-api-html.conf using configuration
00:02:26.514 Configuring doxy-api-man.conf using configuration
00:02:26.514 Program mandb found: YES (/usr/bin/mandb)
00:02:26.514 Program sphinx-build found: NO
00:02:26.514 Configuring rte_build_config.h using configuration
00:02:26.514 Message:
00:02:26.514 =================
00:02:26.514 Applications Enabled
00:02:26.514 =================
00:02:26.514 
00:02:26.514 apps:
00:02:26.514 
00:02:26.514 
00:02:26.514 Message:
00:02:26.514 =================
00:02:26.514 Libraries Enabled
00:02:26.514 =================
00:02:26.514 
00:02:26.514 libs:
00:02:26.514 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:26.514 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:26.514 cryptodev, dmadev, power, reorder, security, vhost,
00:02:26.514 
00:02:26.514 Message:
00:02:26.514 ===============
00:02:26.514 Drivers Enabled
00:02:26.514 ===============
00:02:26.514 
00:02:26.514 common:
00:02:26.514 
00:02:26.514 bus:
00:02:26.514 pci, vdev,
00:02:26.514 mempool:
00:02:26.514 ring,
00:02:26.514 dma:
00:02:26.514 
00:02:26.515 net:
00:02:26.515 
00:02:26.515 crypto:
00:02:26.515 
00:02:26.515 compress:
00:02:26.515 
00:02:26.515 vdpa:
00:02:26.515 
00:02:26.515 
00:02:26.515 Message:
00:02:26.515 =================
00:02:26.515 Content Skipped
00:02:26.515 =================
00:02:26.515 
00:02:26.515 apps:
00:02:26.515 dumpcap: explicitly disabled via build config
00:02:26.515 graph: explicitly disabled via build config
00:02:26.515 pdump: explicitly disabled via build config
00:02:26.515 proc-info: explicitly disabled via build config
00:02:26.515 test-acl: explicitly disabled via build config
00:02:26.515 test-bbdev: explicitly disabled via build config
00:02:26.515 test-cmdline: explicitly disabled via build config
00:02:26.515 test-compress-perf: explicitly disabled via build config
00:02:26.515 test-crypto-perf: explicitly disabled via build config
00:02:26.515 test-dma-perf: explicitly disabled via build config
00:02:26.515 test-eventdev: explicitly disabled via build config
00:02:26.515 test-fib: explicitly disabled via build config
00:02:26.515 test-flow-perf: explicitly disabled via build config
00:02:26.515 test-gpudev: explicitly disabled via build config
00:02:26.515 test-mldev: explicitly disabled via build config
00:02:26.515 test-pipeline: explicitly disabled via build config
00:02:26.515 test-pmd: explicitly disabled via build config
00:02:26.515 test-regex: explicitly disabled via build config
00:02:26.515 test-sad: explicitly disabled via build config
00:02:26.515 test-security-perf: explicitly disabled via build config
00:02:26.515 
00:02:26.515 libs:
00:02:26.515 argparse: explicitly disabled via build config
00:02:26.515 metrics: explicitly disabled via build config
00:02:26.515 acl: explicitly disabled via build config
00:02:26.515 bbdev: explicitly disabled via build config
00:02:26.515 bitratestats: explicitly disabled via build config
00:02:26.515 bpf: explicitly disabled via build config
00:02:26.515 cfgfile: explicitly disabled via build config
00:02:26.515 distributor: explicitly disabled via build config
00:02:26.515 efd: explicitly disabled via build config
00:02:26.515 eventdev: explicitly disabled via build config
00:02:26.515 dispatcher: explicitly disabled via build config
00:02:26.515 gpudev: explicitly disabled via build config
00:02:26.515 gro: explicitly disabled via build config
00:02:26.515 gso: explicitly disabled via build config
00:02:26.515 ip_frag: explicitly disabled via build config
00:02:26.515 jobstats: explicitly disabled via build config
00:02:26.515 latencystats: explicitly disabled via build config
00:02:26.515 lpm: explicitly disabled via build config
00:02:26.515 member: explicitly disabled via build config
00:02:26.515 pcapng: explicitly disabled via build config
00:02:26.515 rawdev: explicitly disabled via build config
00:02:26.515 regexdev: explicitly disabled via build config
00:02:26.515 mldev: explicitly disabled via build config
00:02:26.515 rib: explicitly disabled via build config
00:02:26.515 sched: explicitly disabled via build config
00:02:26.515 stack: explicitly disabled via build config
00:02:26.515 ipsec: explicitly disabled via build config
00:02:26.515 pdcp: explicitly disabled via build config
00:02:26.515 fib: explicitly disabled via build config
00:02:26.515 port: explicitly disabled via build config
00:02:26.515 pdump: explicitly disabled via build config
00:02:26.515 table: explicitly disabled via build config
00:02:26.515 pipeline: explicitly disabled via build config
00:02:26.515 graph: explicitly disabled via build config
00:02:26.515 node: explicitly disabled via build config
00:02:26.515 
00:02:26.515 drivers:
00:02:26.515 common/cpt: not in enabled drivers build config
00:02:26.515 common/dpaax: not in enabled drivers build config
00:02:26.515 common/iavf: not in enabled drivers build config
00:02:26.515 common/idpf: not in enabled drivers build config
00:02:26.515 common/ionic: not in enabled drivers build config
00:02:26.515 common/mvep: not in enabled drivers build config
00:02:26.515 common/octeontx: not in enabled drivers build config
00:02:26.515 bus/auxiliary: not in enabled drivers build config
00:02:26.515 bus/cdx: not in enabled drivers build config
00:02:26.515 bus/dpaa: not in enabled drivers build config
00:02:26.515 bus/fslmc: not in enabled drivers build config
00:02:26.515 bus/ifpga: not in enabled drivers build config
00:02:26.515 bus/platform: not in enabled drivers build config
00:02:26.515 bus/uacce: not in enabled drivers build config
00:02:26.515 bus/vmbus: not in enabled drivers build config
00:02:26.515 common/cnxk: not in enabled drivers build config
00:02:26.515 common/mlx5: not in enabled drivers build config
00:02:26.515 common/nfp: not in enabled drivers build config
00:02:26.515 common/nitrox: not in enabled drivers build config
00:02:26.515 common/qat: not in enabled drivers build config
00:02:26.515 common/sfc_efx: not in enabled drivers build config
00:02:26.515 mempool/bucket: not in enabled drivers build config
00:02:26.515 mempool/cnxk: not in enabled drivers build config
00:02:26.515 mempool/dpaa: not in enabled drivers build config
00:02:26.515 mempool/dpaa2: not in enabled drivers build config
00:02:26.515 mempool/octeontx: not in enabled drivers build config
00:02:26.515 mempool/stack: not in enabled drivers build config
00:02:26.515 dma/cnxk: not in enabled drivers build config
00:02:26.515 dma/dpaa: not in enabled drivers build config
00:02:26.515 dma/dpaa2: not in enabled drivers build config
00:02:26.515 dma/hisilicon: not in enabled drivers build config
00:02:26.515 dma/idxd: not in enabled drivers build config
00:02:26.515 dma/ioat: not in enabled drivers build config
00:02:26.515 dma/skeleton: not in enabled drivers build config
00:02:26.515 net/af_packet: not in enabled drivers build config
00:02:26.515 net/af_xdp: not in enabled drivers build config
00:02:26.515 net/ark: not in enabled drivers build config
00:02:26.515 net/atlantic: not in enabled drivers build config
00:02:26.515 net/avp: not in enabled drivers build config
00:02:26.515 net/axgbe: not in enabled drivers build config
00:02:26.515 net/bnx2x: not in enabled drivers build config
00:02:26.515 net/bnxt: not in enabled drivers build config
00:02:26.515 net/bonding: not in enabled drivers build config
00:02:26.515 net/cnxk: not in enabled drivers build config
00:02:26.515 net/cpfl: not in enabled drivers build config
00:02:26.515 net/cxgbe: not in enabled drivers build config
00:02:26.515 net/dpaa: not in enabled drivers build config
00:02:26.515 net/dpaa2: not in enabled drivers build config
00:02:26.515 net/e1000: not in enabled drivers build config
00:02:26.515 net/ena: not in enabled drivers build config
00:02:26.515 net/enetc: not in enabled drivers build config
00:02:26.515 net/enetfec: not in enabled drivers build config
00:02:26.515 net/enic: not in enabled drivers build config
00:02:26.515 net/failsafe: not in enabled drivers build config
00:02:26.515 net/fm10k: not in enabled drivers build config
00:02:26.515 net/gve: not in enabled drivers build config
00:02:26.515 net/hinic: not in enabled drivers build config
00:02:26.515 net/hns3: not in enabled drivers build config
00:02:26.515 net/i40e: not in enabled drivers build config
00:02:26.515 net/iavf: not in enabled drivers build config
00:02:26.515 net/ice: not in enabled drivers build config
00:02:26.515 net/idpf: not in enabled drivers build config
00:02:26.515 net/igc: not in enabled drivers build config
00:02:26.515 net/ionic: not in enabled drivers build config
00:02:26.515 net/ipn3ke: not in enabled drivers build config
00:02:26.515 net/ixgbe: not in enabled drivers build config
00:02:26.515 net/mana: not in enabled drivers build config
00:02:26.515 net/memif: not in enabled drivers build config
00:02:26.515 net/mlx4: not in enabled drivers build config
00:02:26.515 net/mlx5: not in enabled drivers build config
00:02:26.515 net/mvneta: not in enabled drivers build config
00:02:26.515 net/mvpp2: not in enabled drivers build config
00:02:26.515 net/netvsc: not in enabled drivers build config
00:02:26.515 net/nfb: not in enabled drivers build config
00:02:26.515 net/nfp: not in enabled drivers build config
00:02:26.515 net/ngbe: not in enabled drivers build config
00:02:26.515 net/null: not in enabled drivers build config
00:02:26.515 net/octeontx: not in enabled drivers build config
00:02:26.515 net/octeon_ep: not in enabled drivers build config
00:02:26.515 net/pcap: not in enabled drivers build config
00:02:26.515 net/pfe: not in enabled drivers build config
00:02:26.515 net/qede: not in enabled drivers build config
00:02:26.515 net/ring: not in enabled drivers build config
00:02:26.515 net/sfc: not in enabled drivers build config
00:02:26.515 net/softnic: not in enabled drivers build config
00:02:26.515 net/tap: not in enabled drivers build config
00:02:26.515 net/thunderx: not in enabled drivers build config
00:02:26.515 net/txgbe: not in enabled drivers build config
00:02:26.515 net/vdev_netvsc: not in enabled drivers build config
00:02:26.515 net/vhost: not in enabled drivers build config
00:02:26.515 net/virtio: not in enabled drivers build config
00:02:26.515 net/vmxnet3: not in enabled drivers build config
00:02:26.515 raw/*: missing internal dependency, "rawdev"
00:02:26.515 crypto/armv8: not in enabled drivers build config
00:02:26.515 crypto/bcmfs: not in enabled drivers build config
00:02:26.515 crypto/caam_jr: not in enabled drivers build config
00:02:26.515 crypto/ccp: not in enabled drivers build config
00:02:26.515 crypto/cnxk: not in enabled drivers build config
00:02:26.515 crypto/dpaa_sec: not in enabled drivers build config
00:02:26.515 crypto/dpaa2_sec: not in enabled drivers build config
00:02:26.515 crypto/ipsec_mb: not in enabled drivers build config
00:02:26.515 crypto/mlx5: not in enabled drivers build config
00:02:26.515 crypto/mvsam: not in enabled drivers build config
00:02:26.515 crypto/nitrox: not in enabled drivers build config
00:02:26.515 crypto/null: not in enabled drivers build config
00:02:26.515 crypto/octeontx: not in enabled drivers build config
00:02:26.515 crypto/openssl: not in enabled drivers build config
00:02:26.515 crypto/scheduler: not in enabled drivers build config
00:02:26.515 crypto/uadk: not in enabled drivers build config
00:02:26.515 crypto/virtio: not in enabled drivers build config
00:02:26.515 compress/isal: not in enabled drivers build config
00:02:26.515 compress/mlx5: not in enabled drivers build config
00:02:26.515 compress/nitrox: not in enabled drivers build config
00:02:26.515 compress/octeontx: not in enabled drivers build config
00:02:26.515 compress/zlib: not in enabled drivers build config
00:02:26.515 regex/*: missing internal dependency, "regexdev"
00:02:26.515 ml/*: missing internal dependency, "mldev"
00:02:26.515 vdpa/ifc: not in enabled drivers build config
00:02:26.515 vdpa/mlx5: not in enabled drivers build config
00:02:26.516 vdpa/nfp: not in enabled drivers build config
00:02:26.516 vdpa/sfc: not in enabled drivers build config
00:02:26.516 event/*: missing internal dependency, "eventdev"
00:02:26.516 baseband/*: missing internal dependency, "bbdev"
00:02:26.516 gpu/*: missing internal dependency, "gpudev"
00:02:26.516 
00:02:26.516 
00:02:26.516 Build targets in project: 84
00:02:26.516 
00:02:26.516 DPDK 24.03.0
00:02:26.516 
00:02:26.516 User defined options
00:02:26.516 buildtype : debug
00:02:26.516 default_library : shared
00:02:26.516 libdir : lib
00:02:26.516 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:26.516 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:26.516 c_link_args : 
00:02:26.516 cpu_instruction_set: native
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:26.516 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:26.516 enable_docs : false 00:02:26.516 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:26.516 enable_kmods : false 00:02:26.516 max_lcores : 128 00:02:26.516 tests : false 00:02:26.516 00:02:26.516 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.516 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:26.516 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:26.778 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:26.778 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:26.778 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:26.778 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:26.778 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:26.778 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:26.778 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:26.778 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:26.778 [10/267] Linking static target lib/librte_kvargs.a 00:02:26.778 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:26.778 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:26.778 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:26.778 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.778 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:26.778 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:26.778 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.778 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:26.778 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:26.778 [20/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:26.778 [21/267] Linking static target lib/librte_log.a 00:02:26.778 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:26.778 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:26.778 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:26.778 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:26.778 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:26.778 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.778 [28/267] Linking static target lib/librte_pci.a 00:02:26.778 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:26.778 [30/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 
00:02:27.036 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:27.037 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:27.037 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:27.037 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:27.037 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.037 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:27.037 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.037 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.037 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:27.037 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:27.296 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.296 [42/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.296 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:27.296 [44/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:27.296 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:27.296 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:27.296 [47/267] Linking static target lib/librte_ring.a 00:02:27.296 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:27.296 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:27.296 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:27.296 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:27.296 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:27.296 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:27.296 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:27.296 [55/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:27.296 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:27.296 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:27.296 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.296 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:27.296 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:27.296 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:27.296 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:27.296 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.296 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:27.296 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.296 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.296 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:27.296 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.296 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:27.296 
[70/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:27.296 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:27.296 [72/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.296 [73/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:27.296 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.296 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.296 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:27.296 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:27.296 [78/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:27.296 [79/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:27.296 [80/267] Linking static target lib/librte_timer.a 00:02:27.296 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.296 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.296 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.296 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.296 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.296 [86/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:27.296 [87/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:27.296 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:27.296 [89/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.296 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.296 [91/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.296 [92/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:27.296 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:27.296 [94/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:27.296 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:27.296 [96/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:27.296 [97/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:27.296 [98/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:27.296 [99/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:27.296 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:27.296 [101/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.296 [102/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:27.296 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.296 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:27.296 [105/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.296 [106/267] Linking static target lib/librte_telemetry.a 00:02:27.296 [107/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:27.296 [108/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:27.296 [109/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:27.297 [110/267] Compiling C object 
lib/librte_net.a.p/net_rte_ether.c.o 00:02:27.297 [111/267] Linking static target lib/librte_meter.a 00:02:27.297 [112/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:27.297 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:27.297 [114/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:27.297 [115/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.297 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:27.297 [117/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:27.297 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:27.297 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.297 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:27.297 [121/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:27.297 [122/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:27.297 [123/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:27.297 [124/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.297 [125/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:27.297 [126/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.297 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:27.297 [128/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:27.297 [129/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:27.297 [130/267] Linking static target lib/librte_cmdline.a 00:02:27.297 [131/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:27.297 [132/267] Linking static target lib/librte_mempool.a 00:02:27.297 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.297 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.297 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:27.297 [136/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:27.297 [137/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.297 [138/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.297 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:27.297 [140/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:27.297 [141/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:27.297 [142/267] Linking static target lib/librte_net.a 00:02:27.297 [143/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.557 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.557 [145/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:27.557 [146/267] Linking static target lib/librte_rcu.a 00:02:27.557 [147/267] Linking static target lib/librte_dmadev.a 00:02:27.557 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.557 [149/267] Linking static target lib/librte_security.a 00:02:27.557 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:27.557 [151/267] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:27.557 [152/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.557 [153/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:27.557 [154/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:27.557 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.557 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:27.557 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:27.557 [158/267] Linking static target lib/librte_power.a 00:02:27.557 [159/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:27.557 [160/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.557 [161/267] Linking target lib/librte_log.so.24.1 00:02:27.557 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:27.557 [163/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.557 [164/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:27.557 [165/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:27.557 [166/267] Linking static target lib/librte_reorder.a 00:02:27.557 [167/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:27.557 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.557 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:27.557 [170/267] Linking static target lib/librte_eal.a 00:02:27.557 [171/267] Linking static target lib/librte_compressdev.a 00:02:27.557 [172/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.557 [173/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:27.557 [174/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:27.557 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:27.557 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:27.557 [177/267] Linking static target lib/librte_hash.a 00:02:27.557 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:27.557 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:27.557 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:27.557 [181/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:27.557 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:27.557 [183/267] Linking static target lib/librte_mbuf.a 00:02:27.557 [184/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:27.557 [185/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:27.557 [186/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.557 [187/267] Linking target lib/librte_kvargs.so.24.1 00:02:27.557 [188/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:27.557 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.557 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.557 [191/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.557 [192/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:27.557 [193/267] Linking static target drivers/librte_bus_vdev.a 00:02:27.818 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.818 [195/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:27.818 [196/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.818 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:27.818 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.818 [199/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.818 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.818 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.818 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.818 [203/267] Linking static target drivers/librte_bus_pci.a 00:02:27.818 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.818 [205/267] Linking static target drivers/librte_mempool_ring.a 00:02:27.818 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.818 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:27.818 [208/267] Linking static target lib/librte_cryptodev.a 00:02:27.818 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.818 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.078 [211/267] Linking target lib/librte_telemetry.so.24.1 00:02:28.078 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.078 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.078 [214/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.078 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:28.078 [216/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.338 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.338 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.338 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.338 [220/267] Linking static target lib/librte_ethdev.a 00:02:28.338 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.599 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.600 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.600 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.600 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.861 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.432 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.432 
[228/267] Linking static target lib/librte_vhost.a 00:02:30.003 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.389 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.066 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.009 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.009 [233/267] Linking target lib/librte_eal.so.24.1 00:02:39.009 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:39.270 [235/267] Linking target lib/librte_ring.so.24.1 00:02:39.270 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:39.270 [237/267] Linking target lib/librte_meter.so.24.1 00:02:39.270 [238/267] Linking target lib/librte_timer.so.24.1 00:02:39.270 [239/267] Linking target lib/librte_pci.so.24.1 00:02:39.270 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:39.270 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:39.270 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:39.270 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:39.270 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:39.270 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:39.270 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:39.270 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:39.270 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:39.536 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:39.536 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:39.536 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:39.536 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:39.536 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:39.797 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:39.797 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:39.797 [256/267] Linking target lib/librte_net.so.24.1 00:02:39.797 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:39.797 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:39.797 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:39.797 [260/267] Linking target lib/librte_hash.so.24.1 00:02:39.797 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:39.797 [262/267] Linking target lib/librte_security.so.24.1 00:02:39.797 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:40.058 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:40.058 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:40.058 [266/267] Linking target lib/librte_power.so.24.1 00:02:40.058 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:40.058 INFO: autodetecting backend as ninja 00:02:40.058 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:44.285 CC lib/log/log.o 00:02:44.285 CC lib/log/log_flags.o 00:02:44.286 CC lib/log/log_deprecated.o 
00:02:44.286 CC lib/ut_mock/mock.o 00:02:44.286 CC lib/ut/ut.o 00:02:44.286 LIB libspdk_ut_mock.a 00:02:44.286 LIB libspdk_log.a 00:02:44.286 LIB libspdk_ut.a 00:02:44.286 SO libspdk_log.so.7.1 00:02:44.286 SO libspdk_ut_mock.so.6.0 00:02:44.286 SO libspdk_ut.so.2.0 00:02:44.286 SYMLINK libspdk_ut_mock.so 00:02:44.286 SYMLINK libspdk_log.so 00:02:44.286 SYMLINK libspdk_ut.so 00:02:44.286 CC lib/util/base64.o 00:02:44.286 CC lib/util/bit_array.o 00:02:44.286 CC lib/util/cpuset.o 00:02:44.286 CC lib/util/crc16.o 00:02:44.286 CC lib/util/crc32.o 00:02:44.286 CC lib/util/crc64.o 00:02:44.286 CC lib/util/crc32c.o 00:02:44.286 CC lib/util/crc32_ieee.o 00:02:44.286 CC lib/util/fd_group.o 00:02:44.286 CC lib/util/dif.o 00:02:44.286 CC lib/util/fd.o 00:02:44.286 CC lib/util/file.o 00:02:44.286 CC lib/util/hexlify.o 00:02:44.286 CC lib/ioat/ioat.o 00:02:44.286 CC lib/util/iov.o 00:02:44.286 CC lib/dma/dma.o 00:02:44.286 CC lib/util/math.o 00:02:44.286 CC lib/util/net.o 00:02:44.286 CC lib/util/pipe.o 00:02:44.286 CC lib/util/strerror_tls.o 00:02:44.286 CC lib/util/string.o 00:02:44.286 CC lib/util/uuid.o 00:02:44.286 CC lib/util/xor.o 00:02:44.286 CXX lib/trace_parser/trace.o 00:02:44.286 CC lib/util/zipf.o 00:02:44.286 CC lib/util/md5.o 00:02:44.286 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.286 CC lib/vfio_user/host/vfio_user.o 00:02:44.547 LIB libspdk_dma.a 00:02:44.547 SO libspdk_dma.so.5.0 00:02:44.547 LIB libspdk_ioat.a 00:02:44.547 SO libspdk_ioat.so.7.0 00:02:44.547 SYMLINK libspdk_dma.so 00:02:44.547 SYMLINK libspdk_ioat.so 00:02:44.547 LIB libspdk_vfio_user.a 00:02:44.547 SO libspdk_vfio_user.so.5.0 00:02:44.808 SYMLINK libspdk_vfio_user.so 00:02:44.808 LIB libspdk_util.a 00:02:44.808 SO libspdk_util.so.10.1 00:02:44.808 SYMLINK libspdk_util.so 00:02:45.070 LIB libspdk_trace_parser.a 00:02:45.070 SO libspdk_trace_parser.so.6.0 00:02:45.070 SYMLINK libspdk_trace_parser.so 00:02:45.330 CC lib/conf/conf.o 00:02:45.330 CC lib/rdma_utils/rdma_utils.o 00:02:45.330 CC lib/env_dpdk/env.o 00:02:45.330 CC lib/env_dpdk/memory.o 00:02:45.330 CC lib/idxd/idxd.o 00:02:45.330 CC lib/env_dpdk/pci.o 00:02:45.330 CC lib/json/json_parse.o 00:02:45.330 CC lib/env_dpdk/init.o 00:02:45.330 CC lib/json/json_util.o 00:02:45.330 CC lib/env_dpdk/threads.o 00:02:45.330 CC lib/json/json_write.o 00:02:45.330 CC lib/env_dpdk/pci_ioat.o 00:02:45.330 CC lib/idxd/idxd_user.o 00:02:45.330 CC lib/env_dpdk/pci_virtio.o 00:02:45.330 CC lib/idxd/idxd_kernel.o 00:02:45.330 CC lib/vmd/vmd.o 00:02:45.330 CC lib/env_dpdk/pci_vmd.o 00:02:45.330 CC lib/env_dpdk/pci_idxd.o 00:02:45.330 CC lib/vmd/led.o 00:02:45.330 CC lib/env_dpdk/pci_event.o 00:02:45.330 CC lib/env_dpdk/sigbus_handler.o 00:02:45.330 CC lib/env_dpdk/pci_dpdk.o 00:02:45.330 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:45.330 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:45.590 LIB libspdk_conf.a 00:02:45.590 LIB libspdk_rdma_utils.a 00:02:45.590 SO libspdk_conf.so.6.0 00:02:45.590 LIB libspdk_json.a 00:02:45.590 SO libspdk_rdma_utils.so.1.0 00:02:45.590 SO libspdk_json.so.6.0 00:02:45.590 SYMLINK libspdk_conf.so 00:02:45.590 SYMLINK libspdk_rdma_utils.so 00:02:45.859 SYMLINK libspdk_json.so 00:02:45.859 LIB libspdk_idxd.a 00:02:45.859 SO libspdk_idxd.so.12.1 00:02:45.859 LIB libspdk_vmd.a 00:02:45.859 SO libspdk_vmd.so.6.0 00:02:45.859 SYMLINK libspdk_idxd.so 00:02:46.119 SYMLINK libspdk_vmd.so 00:02:46.119 CC lib/rdma_provider/common.o 00:02:46.119 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:46.119 CC lib/jsonrpc/jsonrpc_server.o 00:02:46.119 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:02:46.119 CC lib/jsonrpc/jsonrpc_client.o 00:02:46.119 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:46.379 LIB libspdk_rdma_provider.a 00:02:46.379 SO libspdk_rdma_provider.so.7.0 00:02:46.379 LIB libspdk_jsonrpc.a 00:02:46.379 SO libspdk_jsonrpc.so.6.0 00:02:46.379 SYMLINK libspdk_rdma_provider.so 00:02:46.379 SYMLINK libspdk_jsonrpc.so 00:02:46.640 LIB libspdk_env_dpdk.a 00:02:46.640 SO libspdk_env_dpdk.so.15.1 00:02:46.640 SYMLINK libspdk_env_dpdk.so 00:02:46.899 CC lib/rpc/rpc.o 00:02:46.899 LIB libspdk_rpc.a 00:02:47.159 SO libspdk_rpc.so.6.0 00:02:47.159 SYMLINK libspdk_rpc.so 00:02:47.420 CC lib/trace/trace.o 00:02:47.420 CC lib/trace/trace_flags.o 00:02:47.420 CC lib/trace/trace_rpc.o 00:02:47.420 CC lib/keyring/keyring.o 00:02:47.420 CC lib/keyring/keyring_rpc.o 00:02:47.420 CC lib/notify/notify.o 00:02:47.420 CC lib/notify/notify_rpc.o 00:02:47.680 LIB libspdk_notify.a 00:02:47.680 SO libspdk_notify.so.6.0 00:02:47.680 LIB libspdk_keyring.a 00:02:47.680 LIB libspdk_trace.a 00:02:47.680 SO libspdk_keyring.so.2.0 00:02:47.680 SO libspdk_trace.so.11.0 00:02:47.680 SYMLINK libspdk_notify.so 00:02:47.940 SYMLINK libspdk_keyring.so 00:02:47.940 SYMLINK libspdk_trace.so 00:02:48.199 CC lib/sock/sock.o 00:02:48.199 CC lib/sock/sock_rpc.o 00:02:48.199 CC lib/thread/thread.o 00:02:48.199 CC lib/thread/iobuf.o 00:02:48.461 LIB libspdk_sock.a 00:02:48.461 SO libspdk_sock.so.10.0 00:02:48.721 SYMLINK libspdk_sock.so 00:02:48.983 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.983 CC lib/nvme/nvme_ctrlr.o 00:02:48.983 CC lib/nvme/nvme_fabric.o 00:02:48.983 CC lib/nvme/nvme_ns_cmd.o 00:02:48.983 CC lib/nvme/nvme_pcie.o 00:02:48.983 CC lib/nvme/nvme_ns.o 00:02:48.983 CC lib/nvme/nvme_pcie_common.o 00:02:48.983 CC lib/nvme/nvme_qpair.o 00:02:48.983 CC lib/nvme/nvme.o 00:02:48.983 CC lib/nvme/nvme_quirks.o 00:02:48.983 CC lib/nvme/nvme_transport.o 00:02:48.983 CC lib/nvme/nvme_discovery.o 00:02:48.983 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:48.983 CC lib/nvme/nvme_opal.o 00:02:48.983 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:48.983 CC lib/nvme/nvme_tcp.o 00:02:48.983 CC lib/nvme/nvme_io_msg.o 00:02:48.983 CC lib/nvme/nvme_auth.o 00:02:48.983 CC lib/nvme/nvme_poll_group.o 00:02:48.983 CC lib/nvme/nvme_zns.o 00:02:48.983 CC lib/nvme/nvme_stubs.o 00:02:48.983 CC lib/nvme/nvme_cuse.o 00:02:48.983 CC lib/nvme/nvme_vfio_user.o 00:02:48.983 CC lib/nvme/nvme_rdma.o 00:02:49.555 LIB libspdk_thread.a 00:02:49.555 SO libspdk_thread.so.11.0 00:02:49.555 SYMLINK libspdk_thread.so 00:02:49.816 CC lib/accel/accel_rpc.o 00:02:49.816 CC lib/vfu_tgt/tgt_endpoint.o 00:02:49.816 CC lib/accel/accel.o 00:02:49.816 CC lib/vfu_tgt/tgt_rpc.o 00:02:49.816 CC lib/accel/accel_sw.o 00:02:49.816 CC lib/fsdev/fsdev_io.o 00:02:49.816 CC lib/virtio/virtio.o 00:02:49.817 CC lib/fsdev/fsdev.o 00:02:49.817 CC lib/virtio/virtio_vhost_user.o 00:02:49.817 CC lib/init/json_config.o 00:02:49.817 CC lib/virtio/virtio_vfio_user.o 00:02:49.817 CC lib/fsdev/fsdev_rpc.o 00:02:49.817 CC lib/virtio/virtio_pci.o 00:02:49.817 CC lib/init/subsystem.o 00:02:49.817 CC lib/init/subsystem_rpc.o 00:02:49.817 CC lib/init/rpc.o 00:02:49.817 CC lib/blob/blobstore.o 00:02:49.817 CC lib/blob/request.o 00:02:49.817 CC lib/blob/zeroes.o 00:02:49.817 CC lib/blob/blob_bs_dev.o 00:02:50.076 LIB libspdk_init.a 00:02:50.337 SO libspdk_init.so.6.0 00:02:50.337 LIB libspdk_virtio.a 00:02:50.337 LIB libspdk_vfu_tgt.a 00:02:50.337 SO libspdk_virtio.so.7.0 00:02:50.337 SYMLINK libspdk_init.so 00:02:50.337 SO libspdk_vfu_tgt.so.3.0 00:02:50.337 
SYMLINK libspdk_virtio.so 00:02:50.337 SYMLINK libspdk_vfu_tgt.so 00:02:50.598 LIB libspdk_fsdev.a 00:02:50.598 SO libspdk_fsdev.so.2.0 00:02:50.598 CC lib/event/app.o 00:02:50.598 CC lib/event/reactor.o 00:02:50.598 CC lib/event/log_rpc.o 00:02:50.598 CC lib/event/app_rpc.o 00:02:50.598 CC lib/event/scheduler_static.o 00:02:50.598 SYMLINK libspdk_fsdev.so 00:02:50.859 LIB libspdk_nvme.a 00:02:50.859 LIB libspdk_accel.a 00:02:50.859 SO libspdk_accel.so.16.0 00:02:51.119 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:51.119 SO libspdk_nvme.so.15.0 00:02:51.119 SYMLINK libspdk_accel.so 00:02:51.119 LIB libspdk_event.a 00:02:51.119 SO libspdk_event.so.14.0 00:02:51.119 SYMLINK libspdk_event.so 00:02:51.380 SYMLINK libspdk_nvme.so 00:02:51.380 CC lib/bdev/bdev.o 00:02:51.380 CC lib/bdev/bdev_rpc.o 00:02:51.380 CC lib/bdev/part.o 00:02:51.380 CC lib/bdev/bdev_zone.o 00:02:51.380 CC lib/bdev/scsi_nvme.o 00:02:51.641 LIB libspdk_fuse_dispatcher.a 00:02:51.641 SO libspdk_fuse_dispatcher.so.1.0 00:02:51.641 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.583 LIB libspdk_blob.a 00:02:52.583 SO libspdk_blob.so.11.0 00:02:52.844 SYMLINK libspdk_blob.so 00:02:53.105 CC lib/blobfs/blobfs.o 00:02:53.105 CC lib/blobfs/tree.o 00:02:53.105 CC lib/lvol/lvol.o 00:02:53.677 LIB libspdk_bdev.a 00:02:53.677 SO libspdk_bdev.so.17.0 00:02:53.940 LIB libspdk_blobfs.a 00:02:53.940 SO libspdk_blobfs.so.10.0 00:02:53.940 SYMLINK libspdk_bdev.so 00:02:53.940 LIB libspdk_lvol.a 00:02:53.940 SYMLINK libspdk_blobfs.so 00:02:53.940 SO libspdk_lvol.so.10.0 00:02:53.940 SYMLINK libspdk_lvol.so 00:02:54.200 CC lib/nbd/nbd.o 00:02:54.200 CC lib/nbd/nbd_rpc.o 00:02:54.200 CC lib/scsi/dev.o 00:02:54.200 CC lib/scsi/lun.o 00:02:54.200 CC lib/scsi/port.o 00:02:54.200 CC lib/scsi/scsi.o 00:02:54.200 CC lib/scsi/scsi_bdev.o 00:02:54.200 CC lib/nvmf/ctrlr.o 00:02:54.200 CC lib/scsi/scsi_pr.o 00:02:54.200 CC lib/ftl/ftl_core.o 00:02:54.200 CC lib/scsi/scsi_rpc.o 00:02:54.200 CC lib/nvmf/ctrlr_discovery.o 00:02:54.200 CC lib/ublk/ublk.o 00:02:54.200 CC lib/nvmf/ctrlr_bdev.o 00:02:54.200 CC lib/ftl/ftl_layout.o 00:02:54.200 CC lib/scsi/task.o 00:02:54.200 CC lib/ftl/ftl_init.o 00:02:54.200 CC lib/ublk/ublk_rpc.o 00:02:54.200 CC lib/nvmf/subsystem.o 00:02:54.200 CC lib/nvmf/nvmf.o 00:02:54.200 CC lib/ftl/ftl_debug.o 00:02:54.200 CC lib/nvmf/nvmf_rpc.o 00:02:54.200 CC lib/ftl/ftl_io.o 00:02:54.200 CC lib/nvmf/transport.o 00:02:54.200 CC lib/ftl/ftl_sb.o 00:02:54.200 CC lib/nvmf/tcp.o 00:02:54.200 CC lib/ftl/ftl_l2p.o 00:02:54.200 CC lib/nvmf/stubs.o 00:02:54.200 CC lib/nvmf/mdns_server.o 00:02:54.200 CC lib/ftl/ftl_l2p_flat.o 00:02:54.200 CC lib/ftl/ftl_nv_cache.o 00:02:54.200 CC lib/nvmf/vfio_user.o 00:02:54.200 CC lib/ftl/ftl_band.o 00:02:54.200 CC lib/nvmf/rdma.o 00:02:54.200 CC lib/ftl/ftl_band_ops.o 00:02:54.200 CC lib/nvmf/auth.o 00:02:54.200 CC lib/ftl/ftl_writer.o 00:02:54.200 CC lib/ftl/ftl_rq.o 00:02:54.200 CC lib/ftl/ftl_reloc.o 00:02:54.200 CC lib/ftl/ftl_l2p_cache.o 00:02:54.200 CC lib/ftl/ftl_p2l.o 00:02:54.200 CC lib/ftl/ftl_p2l_log.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:02:54.200 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:54.200 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:54.200 CC lib/ftl/utils/ftl_conf.o 00:02:54.200 CC lib/ftl/utils/ftl_md.o 00:02:54.200 CC lib/ftl/utils/ftl_mempool.o 00:02:54.200 CC lib/ftl/utils/ftl_property.o 00:02:54.200 CC lib/ftl/utils/ftl_bitmap.o 00:02:54.200 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:54.200 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:54.200 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:54.200 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:54.200 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:54.200 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:54.200 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:54.200 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:54.200 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:54.200 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:54.200 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:54.200 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:54.200 CC lib/ftl/base/ftl_base_dev.o 00:02:54.200 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:54.200 CC lib/ftl/base/ftl_base_bdev.o 00:02:54.200 CC lib/ftl/ftl_trace.o 00:02:54.768 LIB libspdk_nbd.a 00:02:54.768 SO libspdk_nbd.so.7.0 00:02:54.768 LIB libspdk_scsi.a 00:02:54.768 SYMLINK libspdk_nbd.so 00:02:54.768 SO libspdk_scsi.so.9.0 00:02:55.028 LIB libspdk_ublk.a 00:02:55.028 SYMLINK libspdk_scsi.so 00:02:55.028 SO libspdk_ublk.so.3.0 00:02:55.028 SYMLINK libspdk_ublk.so 00:02:55.289 LIB libspdk_ftl.a 00:02:55.289 CC lib/vhost/vhost.o 00:02:55.289 CC lib/vhost/vhost_rpc.o 00:02:55.289 CC lib/vhost/vhost_scsi.o 00:02:55.289 CC lib/vhost/vhost_blk.o 00:02:55.289 CC lib/vhost/rte_vhost_user.o 00:02:55.289 CC lib/iscsi/conn.o 00:02:55.289 CC lib/iscsi/init_grp.o 00:02:55.289 CC lib/iscsi/iscsi.o 00:02:55.289 CC lib/iscsi/param.o 00:02:55.289 CC lib/iscsi/portal_grp.o 00:02:55.289 CC lib/iscsi/tgt_node.o 00:02:55.289 CC lib/iscsi/iscsi_subsystem.o 00:02:55.289 CC lib/iscsi/iscsi_rpc.o 00:02:55.289 CC lib/iscsi/task.o 00:02:55.289 SO libspdk_ftl.so.9.0 00:02:55.551 SYMLINK libspdk_ftl.so 00:02:56.124 LIB libspdk_nvmf.a 00:02:56.124 SO libspdk_nvmf.so.20.0 00:02:56.124 LIB libspdk_vhost.a 00:02:56.385 SO libspdk_vhost.so.8.0 00:02:56.385 SYMLINK libspdk_nvmf.so 00:02:56.385 SYMLINK libspdk_vhost.so 00:02:56.385 LIB libspdk_iscsi.a 00:02:56.646 SO libspdk_iscsi.so.8.0 00:02:56.646 SYMLINK libspdk_iscsi.so 00:02:57.216 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.216 CC module/vfu_device/vfu_virtio.o 00:02:57.216 CC module/vfu_device/vfu_virtio_blk.o 00:02:57.216 CC module/vfu_device/vfu_virtio_scsi.o 00:02:57.216 CC module/vfu_device/vfu_virtio_rpc.o 00:02:57.216 CC module/vfu_device/vfu_virtio_fs.o 00:02:57.478 LIB libspdk_env_dpdk_rpc.a 00:02:57.478 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.478 CC module/sock/posix/posix.o 00:02:57.478 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.478 CC module/keyring/file/keyring.o 00:02:57.478 CC module/keyring/file/keyring_rpc.o 00:02:57.478 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.478 CC module/fsdev/aio/fsdev_aio.o 00:02:57.478 CC module/accel/dsa/accel_dsa.o 00:02:57.478 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.478 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.478 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.478 CC module/accel/ioat/accel_ioat.o 00:02:57.478 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.478 CC module/accel/error/accel_error.o 00:02:57.478 CC module/accel/error/accel_error_rpc.o 00:02:57.478 CC module/keyring/linux/keyring.o 00:02:57.478 CC module/accel/iaa/accel_iaa.o 00:02:57.478 CC 
module/keyring/linux/keyring_rpc.o 00:02:57.478 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.478 CC module/blob/bdev/blob_bdev.o 00:02:57.478 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.478 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.739 LIB libspdk_keyring_file.a 00:02:57.739 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.739 LIB libspdk_keyring_linux.a 00:02:57.739 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.739 SO libspdk_keyring_file.so.2.0 00:02:57.739 LIB libspdk_scheduler_gscheduler.a 00:02:57.739 LIB libspdk_scheduler_dynamic.a 00:02:57.739 SO libspdk_keyring_linux.so.1.0 00:02:57.739 LIB libspdk_accel_ioat.a 00:02:57.739 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.739 LIB libspdk_accel_iaa.a 00:02:57.739 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.739 LIB libspdk_accel_error.a 00:02:57.739 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.739 SO libspdk_accel_ioat.so.6.0 00:02:57.739 SYMLINK libspdk_keyring_file.so 00:02:57.739 SO libspdk_accel_iaa.so.3.0 00:02:57.739 SO libspdk_accel_error.so.2.0 00:02:57.739 SYMLINK libspdk_keyring_linux.so 00:02:57.739 LIB libspdk_blob_bdev.a 00:02:57.739 LIB libspdk_accel_dsa.a 00:02:57.739 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.739 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.739 SYMLINK libspdk_accel_ioat.so 00:02:57.739 SO libspdk_accel_dsa.so.5.0 00:02:57.739 SO libspdk_blob_bdev.so.11.0 00:02:57.739 SYMLINK libspdk_accel_iaa.so 00:02:57.739 SYMLINK libspdk_accel_error.so 00:02:57.739 LIB libspdk_vfu_device.a 00:02:58.001 SYMLINK libspdk_accel_dsa.so 00:02:58.001 SO libspdk_vfu_device.so.3.0 00:02:58.001 SYMLINK libspdk_blob_bdev.so 00:02:58.001 SYMLINK libspdk_vfu_device.so 00:02:58.001 LIB libspdk_fsdev_aio.a 00:02:58.262 SO libspdk_fsdev_aio.so.1.0 00:02:58.262 LIB libspdk_sock_posix.a 00:02:58.262 SO libspdk_sock_posix.so.6.0 00:02:58.262 SYMLINK libspdk_fsdev_aio.so 00:02:58.262 SYMLINK libspdk_sock_posix.so 00:02:58.521 CC module/bdev/error/vbdev_error.o 00:02:58.521 CC module/bdev/malloc/bdev_malloc.o 00:02:58.521 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.521 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.521 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.521 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.521 CC module/bdev/null/bdev_null.o 00:02:58.521 CC module/bdev/null/bdev_null_rpc.o 00:02:58.521 CC module/bdev/delay/vbdev_delay.o 00:02:58.521 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.521 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.521 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.521 CC module/bdev/raid/bdev_raid.o 00:02:58.521 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.521 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.521 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.521 CC module/bdev/raid/raid0.o 00:02:58.521 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.521 CC module/bdev/raid/raid1.o 00:02:58.521 CC module/bdev/gpt/gpt.o 00:02:58.521 CC module/bdev/raid/concat.o 00:02:58.521 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.521 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.521 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.521 CC module/bdev/split/vbdev_split.o 00:02:58.521 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.521 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.521 CC module/bdev/ftl/bdev_ftl.o 00:02:58.521 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.521 CC module/bdev/nvme/bdev_nvme.o 00:02:58.521 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.521 CC module/bdev/passthru/vbdev_passthru.o 00:02:58.522 CC module/bdev/nvme/nvme_rpc.o 00:02:58.522 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.522 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.522 CC module/bdev/aio/bdev_aio.o 00:02:58.522 CC module/bdev/nvme/vbdev_opal.o 00:02:58.522 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.522 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.522 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.522 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.522 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.784 LIB libspdk_blobfs_bdev.a 00:02:58.784 SO libspdk_blobfs_bdev.so.6.0 00:02:58.784 LIB libspdk_bdev_null.a 00:02:58.784 LIB libspdk_bdev_error.a 00:02:58.784 LIB libspdk_bdev_split.a 00:02:58.784 LIB libspdk_bdev_gpt.a 00:02:58.784 SO libspdk_bdev_null.so.6.0 00:02:58.784 SO libspdk_bdev_error.so.6.0 00:02:58.784 SYMLINK libspdk_blobfs_bdev.so 00:02:58.784 LIB libspdk_bdev_ftl.a 00:02:58.784 SO libspdk_bdev_split.so.6.0 00:02:58.784 LIB libspdk_bdev_passthru.a 00:02:58.784 SO libspdk_bdev_gpt.so.6.0 00:02:58.784 SO libspdk_bdev_ftl.so.6.0 00:02:58.784 LIB libspdk_bdev_aio.a 00:02:58.784 LIB libspdk_bdev_malloc.a 00:02:58.784 SYMLINK libspdk_bdev_null.so 00:02:58.784 SO libspdk_bdev_passthru.so.6.0 00:02:58.784 SYMLINK libspdk_bdev_error.so 00:02:58.784 LIB libspdk_bdev_zone_block.a 00:02:58.784 SYMLINK libspdk_bdev_split.so 00:02:58.784 LIB libspdk_bdev_delay.a 00:02:58.784 SYMLINK libspdk_bdev_gpt.so 00:02:58.784 LIB libspdk_bdev_iscsi.a 00:02:58.784 SO libspdk_bdev_aio.so.6.0 00:02:58.784 SO libspdk_bdev_malloc.so.6.0 00:02:58.784 SO libspdk_bdev_zone_block.so.6.0 00:02:58.784 SO libspdk_bdev_delay.so.6.0 00:02:58.784 SYMLINK libspdk_bdev_ftl.so 00:02:58.784 SYMLINK libspdk_bdev_passthru.so 00:02:58.784 SO libspdk_bdev_iscsi.so.6.0 00:02:59.045 SYMLINK libspdk_bdev_aio.so 00:02:59.045 SYMLINK libspdk_bdev_malloc.so 00:02:59.045 SYMLINK libspdk_bdev_delay.so 00:02:59.045 LIB libspdk_bdev_lvol.a 00:02:59.045 SYMLINK libspdk_bdev_zone_block.so 00:02:59.045 SYMLINK libspdk_bdev_iscsi.so 00:02:59.045 LIB libspdk_bdev_virtio.a 00:02:59.045 SO libspdk_bdev_lvol.so.6.0 00:02:59.045 SO libspdk_bdev_virtio.so.6.0 00:02:59.045 SYMLINK libspdk_bdev_lvol.so 00:02:59.045 SYMLINK libspdk_bdev_virtio.so 00:02:59.306 LIB libspdk_bdev_raid.a 00:02:59.306 SO libspdk_bdev_raid.so.6.0 00:02:59.568 SYMLINK libspdk_bdev_raid.so 00:03:00.957 LIB libspdk_bdev_nvme.a 00:03:00.957 SO libspdk_bdev_nvme.so.7.1 00:03:00.957 SYMLINK libspdk_bdev_nvme.so 00:03:01.529 CC module/event/subsystems/iobuf/iobuf.o 00:03:01.529 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:01.529 CC module/event/subsystems/sock/sock.o 00:03:01.529 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:01.529 CC module/event/subsystems/scheduler/scheduler.o 00:03:01.529 CC module/event/subsystems/vmd/vmd.o 00:03:01.529 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:01.529 CC module/event/subsystems/fsdev/fsdev.o 00:03:01.529 CC module/event/subsystems/keyring/keyring.o 00:03:01.529 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:01.790 LIB libspdk_event_sock.a 00:03:01.790 LIB libspdk_event_iobuf.a 00:03:01.790 LIB libspdk_event_vhost_blk.a 00:03:01.790 SO libspdk_event_sock.so.5.0 00:03:01.790 LIB libspdk_event_keyring.a 00:03:01.790 LIB libspdk_event_vmd.a 00:03:01.790 LIB libspdk_event_fsdev.a 00:03:01.790 LIB libspdk_event_scheduler.a 00:03:01.790 LIB libspdk_event_vfu_tgt.a 00:03:01.790 SO libspdk_event_vhost_blk.so.3.0 00:03:01.790 SO libspdk_event_iobuf.so.3.0 00:03:01.790 SO libspdk_event_keyring.so.1.0 00:03:01.790 SO libspdk_event_fsdev.so.1.0 00:03:01.790 SO libspdk_event_scheduler.so.4.0 
00:03:01.790 SO libspdk_event_vmd.so.6.0 00:03:01.790 SO libspdk_event_vfu_tgt.so.3.0 00:03:01.790 SYMLINK libspdk_event_sock.so 00:03:01.790 SYMLINK libspdk_event_vhost_blk.so 00:03:01.790 SYMLINK libspdk_event_keyring.so 00:03:01.790 SYMLINK libspdk_event_iobuf.so 00:03:01.790 SYMLINK libspdk_event_fsdev.so 00:03:01.790 SYMLINK libspdk_event_scheduler.so 00:03:01.790 SYMLINK libspdk_event_vfu_tgt.so 00:03:02.051 SYMLINK libspdk_event_vmd.so 00:03:02.311 CC module/event/subsystems/accel/accel.o 00:03:02.311 LIB libspdk_event_accel.a 00:03:02.572 SO libspdk_event_accel.so.6.0 00:03:02.572 SYMLINK libspdk_event_accel.so 00:03:02.832 CC module/event/subsystems/bdev/bdev.o 00:03:03.094 LIB libspdk_event_bdev.a 00:03:03.094 SO libspdk_event_bdev.so.6.0 00:03:03.094 SYMLINK libspdk_event_bdev.so 00:03:03.355 CC module/event/subsystems/scsi/scsi.o 00:03:03.617 CC module/event/subsystems/ublk/ublk.o 00:03:03.617 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:03.617 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:03.617 CC module/event/subsystems/nbd/nbd.o 00:03:03.617 LIB libspdk_event_scsi.a 00:03:03.617 LIB libspdk_event_ublk.a 00:03:03.617 LIB libspdk_event_nbd.a 00:03:03.617 SO libspdk_event_ublk.so.3.0 00:03:03.617 SO libspdk_event_scsi.so.6.0 00:03:03.617 SO libspdk_event_nbd.so.6.0 00:03:03.617 LIB libspdk_event_nvmf.a 00:03:03.878 SYMLINK libspdk_event_ublk.so 00:03:03.878 SYMLINK libspdk_event_scsi.so 00:03:03.878 SYMLINK libspdk_event_nbd.so 00:03:03.878 SO libspdk_event_nvmf.so.6.0 00:03:03.878 SYMLINK libspdk_event_nvmf.so 00:03:04.139 CC module/event/subsystems/iscsi/iscsi.o 00:03:04.139 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:04.399 LIB libspdk_event_vhost_scsi.a 00:03:04.399 LIB libspdk_event_iscsi.a 00:03:04.399 SO libspdk_event_vhost_scsi.so.3.0 00:03:04.399 SO libspdk_event_iscsi.so.6.0 00:03:04.399 SYMLINK libspdk_event_vhost_scsi.so 00:03:04.399 SYMLINK libspdk_event_iscsi.so 00:03:04.660 SO libspdk.so.6.0 00:03:04.660 SYMLINK libspdk.so 00:03:04.921 CC app/spdk_lspci/spdk_lspci.o 00:03:04.921 CXX app/trace/trace.o 00:03:04.921 CC app/trace_record/trace_record.o 00:03:04.921 TEST_HEADER include/spdk/accel.h 00:03:04.921 TEST_HEADER include/spdk/accel_module.h 00:03:04.921 TEST_HEADER include/spdk/assert.h 00:03:04.921 CC app/spdk_top/spdk_top.o 00:03:04.921 TEST_HEADER include/spdk/barrier.h 00:03:04.921 CC test/rpc_client/rpc_client_test.o 00:03:04.921 TEST_HEADER include/spdk/base64.h 00:03:04.921 CC app/spdk_nvme_identify/identify.o 00:03:04.921 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.921 TEST_HEADER include/spdk/bdev.h 00:03:04.921 TEST_HEADER include/spdk/bdev_module.h 00:03:04.921 CC app/spdk_nvme_perf/perf.o 00:03:04.921 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.921 TEST_HEADER include/spdk/bit_array.h 00:03:04.921 TEST_HEADER include/spdk/bit_pool.h 00:03:04.921 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.921 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.921 TEST_HEADER include/spdk/blobfs.h 00:03:04.921 TEST_HEADER include/spdk/blob.h 00:03:04.921 TEST_HEADER include/spdk/conf.h 00:03:04.921 CC app/nvmf_tgt/nvmf_main.o 00:03:04.921 TEST_HEADER include/spdk/config.h 00:03:04.921 TEST_HEADER include/spdk/cpuset.h 00:03:04.921 TEST_HEADER include/spdk/crc16.h 00:03:04.921 TEST_HEADER include/spdk/crc32.h 00:03:04.921 TEST_HEADER include/spdk/crc64.h 00:03:04.921 TEST_HEADER include/spdk/dif.h 00:03:04.921 TEST_HEADER include/spdk/dma.h 00:03:04.921 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:04.921 TEST_HEADER 
include/spdk/endian.h 00:03:04.921 TEST_HEADER include/spdk/env.h 00:03:04.921 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.921 TEST_HEADER include/spdk/event.h 00:03:04.921 TEST_HEADER include/spdk/fd_group.h 00:03:04.921 TEST_HEADER include/spdk/fd.h 00:03:04.921 CC app/spdk_dd/spdk_dd.o 00:03:04.921 TEST_HEADER include/spdk/file.h 00:03:04.921 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.921 TEST_HEADER include/spdk/fsdev.h 00:03:04.921 CC app/iscsi_tgt/iscsi_tgt.o 00:03:04.922 TEST_HEADER include/spdk/ftl.h 00:03:04.922 TEST_HEADER include/spdk/gpt_spec.h 00:03:05.180 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:05.180 TEST_HEADER include/spdk/hexlify.h 00:03:05.180 TEST_HEADER include/spdk/histogram_data.h 00:03:05.180 TEST_HEADER include/spdk/idxd.h 00:03:05.180 TEST_HEADER include/spdk/idxd_spec.h 00:03:05.180 TEST_HEADER include/spdk/init.h 00:03:05.180 TEST_HEADER include/spdk/ioat.h 00:03:05.180 TEST_HEADER include/spdk/json.h 00:03:05.180 TEST_HEADER include/spdk/ioat_spec.h 00:03:05.180 TEST_HEADER include/spdk/iscsi_spec.h 00:03:05.180 TEST_HEADER include/spdk/jsonrpc.h 00:03:05.180 TEST_HEADER include/spdk/keyring.h 00:03:05.180 TEST_HEADER include/spdk/keyring_module.h 00:03:05.180 TEST_HEADER include/spdk/likely.h 00:03:05.180 TEST_HEADER include/spdk/log.h 00:03:05.180 TEST_HEADER include/spdk/lvol.h 00:03:05.180 TEST_HEADER include/spdk/md5.h 00:03:05.180 TEST_HEADER include/spdk/memory.h 00:03:05.180 TEST_HEADER include/spdk/mmio.h 00:03:05.180 TEST_HEADER include/spdk/nbd.h 00:03:05.180 TEST_HEADER include/spdk/net.h 00:03:05.180 TEST_HEADER include/spdk/notify.h 00:03:05.180 TEST_HEADER include/spdk/nvme.h 00:03:05.180 TEST_HEADER include/spdk/nvme_intel.h 00:03:05.180 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:05.180 CC app/spdk_tgt/spdk_tgt.o 00:03:05.180 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:05.180 TEST_HEADER include/spdk/nvme_spec.h 00:03:05.180 TEST_HEADER include/spdk/nvme_zns.h 00:03:05.180 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:05.180 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:05.180 TEST_HEADER include/spdk/nvmf.h 00:03:05.180 TEST_HEADER include/spdk/nvmf_spec.h 00:03:05.180 TEST_HEADER include/spdk/nvmf_transport.h 00:03:05.180 TEST_HEADER include/spdk/opal.h 00:03:05.181 TEST_HEADER include/spdk/opal_spec.h 00:03:05.181 TEST_HEADER include/spdk/pci_ids.h 00:03:05.181 TEST_HEADER include/spdk/pipe.h 00:03:05.181 TEST_HEADER include/spdk/queue.h 00:03:05.181 TEST_HEADER include/spdk/reduce.h 00:03:05.181 TEST_HEADER include/spdk/rpc.h 00:03:05.181 TEST_HEADER include/spdk/scsi.h 00:03:05.181 TEST_HEADER include/spdk/scheduler.h 00:03:05.181 TEST_HEADER include/spdk/scsi_spec.h 00:03:05.181 TEST_HEADER include/spdk/stdinc.h 00:03:05.181 TEST_HEADER include/spdk/sock.h 00:03:05.181 TEST_HEADER include/spdk/string.h 00:03:05.181 TEST_HEADER include/spdk/thread.h 00:03:05.181 TEST_HEADER include/spdk/trace.h 00:03:05.181 TEST_HEADER include/spdk/trace_parser.h 00:03:05.181 TEST_HEADER include/spdk/ublk.h 00:03:05.181 TEST_HEADER include/spdk/tree.h 00:03:05.181 TEST_HEADER include/spdk/uuid.h 00:03:05.181 TEST_HEADER include/spdk/util.h 00:03:05.181 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:05.181 TEST_HEADER include/spdk/version.h 00:03:05.181 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:05.181 TEST_HEADER include/spdk/vhost.h 00:03:05.181 TEST_HEADER include/spdk/vmd.h 00:03:05.181 TEST_HEADER include/spdk/xor.h 00:03:05.181 TEST_HEADER include/spdk/zipf.h 00:03:05.181 CXX test/cpp_headers/accel.o 00:03:05.181 CXX 
test/cpp_headers/accel_module.o 00:03:05.181 CXX test/cpp_headers/assert.o 00:03:05.181 CXX test/cpp_headers/barrier.o 00:03:05.181 CXX test/cpp_headers/base64.o 00:03:05.181 CXX test/cpp_headers/bdev.o 00:03:05.181 CXX test/cpp_headers/bdev_module.o 00:03:05.181 CXX test/cpp_headers/bdev_zone.o 00:03:05.181 CXX test/cpp_headers/blob_bdev.o 00:03:05.181 CXX test/cpp_headers/bit_array.o 00:03:05.181 CXX test/cpp_headers/bit_pool.o 00:03:05.181 CXX test/cpp_headers/blobfs_bdev.o 00:03:05.181 CXX test/cpp_headers/blobfs.o 00:03:05.181 CXX test/cpp_headers/conf.o 00:03:05.181 CXX test/cpp_headers/blob.o 00:03:05.181 CXX test/cpp_headers/config.o 00:03:05.181 CXX test/cpp_headers/cpuset.o 00:03:05.181 CXX test/cpp_headers/crc16.o 00:03:05.181 CXX test/cpp_headers/crc64.o 00:03:05.181 CXX test/cpp_headers/crc32.o 00:03:05.181 CXX test/cpp_headers/dif.o 00:03:05.181 CXX test/cpp_headers/dma.o 00:03:05.181 CXX test/cpp_headers/env_dpdk.o 00:03:05.181 CXX test/cpp_headers/endian.o 00:03:05.181 CXX test/cpp_headers/env.o 00:03:05.181 CXX test/cpp_headers/event.o 00:03:05.181 CXX test/cpp_headers/fd_group.o 00:03:05.181 CXX test/cpp_headers/fd.o 00:03:05.181 CXX test/cpp_headers/file.o 00:03:05.181 CXX test/cpp_headers/ftl.o 00:03:05.181 CXX test/cpp_headers/fsdev_module.o 00:03:05.181 CXX test/cpp_headers/fsdev.o 00:03:05.181 CXX test/cpp_headers/gpt_spec.o 00:03:05.181 CXX test/cpp_headers/fuse_dispatcher.o 00:03:05.181 CXX test/cpp_headers/hexlify.o 00:03:05.181 CXX test/cpp_headers/histogram_data.o 00:03:05.181 CXX test/cpp_headers/idxd_spec.o 00:03:05.181 CXX test/cpp_headers/idxd.o 00:03:05.181 CXX test/cpp_headers/ioat.o 00:03:05.181 CXX test/cpp_headers/ioat_spec.o 00:03:05.181 CXX test/cpp_headers/init.o 00:03:05.181 CXX test/cpp_headers/iscsi_spec.o 00:03:05.181 CXX test/cpp_headers/json.o 00:03:05.181 CXX test/cpp_headers/keyring.o 00:03:05.181 CXX test/cpp_headers/likely.o 00:03:05.181 CXX test/cpp_headers/jsonrpc.o 00:03:05.181 CXX test/cpp_headers/log.o 00:03:05.181 CXX test/cpp_headers/keyring_module.o 00:03:05.181 CXX test/cpp_headers/md5.o 00:03:05.181 CC examples/ioat/perf/perf.o 00:03:05.181 CXX test/cpp_headers/lvol.o 00:03:05.181 CXX test/cpp_headers/net.o 00:03:05.181 CXX test/cpp_headers/memory.o 00:03:05.181 CXX test/cpp_headers/notify.o 00:03:05.181 CXX test/cpp_headers/mmio.o 00:03:05.181 CXX test/cpp_headers/nvme.o 00:03:05.181 CXX test/cpp_headers/nbd.o 00:03:05.181 CXX test/cpp_headers/nvme_intel.o 00:03:05.181 CXX test/cpp_headers/nvme_ocssd.o 00:03:05.181 CXX test/cpp_headers/nvme_zns.o 00:03:05.181 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:05.181 CXX test/cpp_headers/nvme_spec.o 00:03:05.181 CC examples/ioat/verify/verify.o 00:03:05.181 CXX test/cpp_headers/nvmf_cmd.o 00:03:05.181 CXX test/cpp_headers/nvmf.o 00:03:05.181 CC examples/util/zipf/zipf.o 00:03:05.181 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:05.181 CXX test/cpp_headers/nvmf_spec.o 00:03:05.181 CXX test/cpp_headers/nvmf_transport.o 00:03:05.181 CXX test/cpp_headers/opal_spec.o 00:03:05.181 CXX test/cpp_headers/opal.o 00:03:05.181 CXX test/cpp_headers/queue.o 00:03:05.181 CXX test/cpp_headers/pci_ids.o 00:03:05.181 CXX test/cpp_headers/pipe.o 00:03:05.181 CXX test/cpp_headers/reduce.o 00:03:05.181 CXX test/cpp_headers/scheduler.o 00:03:05.181 CXX test/cpp_headers/scsi.o 00:03:05.181 CXX test/cpp_headers/rpc.o 00:03:05.181 LINK spdk_lspci 00:03:05.181 CXX test/cpp_headers/scsi_spec.o 00:03:05.181 CXX test/cpp_headers/sock.o 00:03:05.181 CXX test/cpp_headers/stdinc.o 00:03:05.181 CC app/fio/nvme/fio_plugin.o 
00:03:05.181 CXX test/cpp_headers/string.o 00:03:05.181 CXX test/cpp_headers/trace.o 00:03:05.181 CC test/app/jsoncat/jsoncat.o 00:03:05.181 CXX test/cpp_headers/thread.o 00:03:05.181 CXX test/cpp_headers/tree.o 00:03:05.181 CXX test/cpp_headers/trace_parser.o 00:03:05.181 CC test/app/stub/stub.o 00:03:05.181 CC test/env/vtophys/vtophys.o 00:03:05.181 CXX test/cpp_headers/uuid.o 00:03:05.181 CXX test/cpp_headers/ublk.o 00:03:05.181 CXX test/cpp_headers/util.o 00:03:05.181 CXX test/cpp_headers/version.o 00:03:05.181 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.181 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.181 CXX test/cpp_headers/vhost.o 00:03:05.181 CXX test/cpp_headers/vmd.o 00:03:05.181 CXX test/cpp_headers/xor.o 00:03:05.181 CXX test/cpp_headers/zipf.o 00:03:05.181 CC test/app/histogram_perf/histogram_perf.o 00:03:05.181 CC test/env/pci/pci_ut.o 00:03:05.181 CC test/env/memory/memory_ut.o 00:03:05.181 CC test/thread/poller_perf/poller_perf.o 00:03:05.443 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.443 CC test/app/bdev_svc/bdev_svc.o 00:03:05.443 CC test/dma/test_dma/test_dma.o 00:03:05.444 LINK rpc_client_test 00:03:05.444 CC app/fio/bdev/fio_plugin.o 00:03:05.444 LINK interrupt_tgt 00:03:05.444 LINK spdk_nvme_discover 00:03:05.444 LINK spdk_trace_record 00:03:05.444 LINK nvmf_tgt 00:03:05.444 LINK iscsi_tgt 00:03:05.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:05.702 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:05.702 CC test/env/mem_callbacks/mem_callbacks.o 00:03:05.702 LINK spdk_tgt 00:03:05.702 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:05.702 LINK spdk_dd 00:03:05.702 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:05.702 LINK vtophys 00:03:05.702 LINK env_dpdk_post_init 00:03:05.961 LINK jsoncat 00:03:05.961 LINK histogram_perf 00:03:05.961 LINK zipf 00:03:05.961 LINK ioat_perf 00:03:05.961 LINK stub 00:03:05.961 LINK poller_perf 00:03:05.961 LINK spdk_trace 00:03:05.961 LINK bdev_svc 00:03:05.961 LINK verify 00:03:06.221 LINK spdk_nvme_perf 00:03:06.221 LINK nvme_fuzz 00:03:06.221 LINK vhost_fuzz 00:03:06.221 LINK pci_ut 00:03:06.221 LINK test_dma 00:03:06.221 LINK spdk_nvme 00:03:06.221 LINK spdk_bdev 00:03:06.482 CC app/vhost/vhost.o 00:03:06.482 CC examples/idxd/perf/perf.o 00:03:06.482 LINK mem_callbacks 00:03:06.482 CC examples/sock/hello_world/hello_sock.o 00:03:06.482 LINK spdk_top 00:03:06.482 LINK spdk_nvme_identify 00:03:06.482 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.482 CC examples/vmd/led/led.o 00:03:06.482 CC test/event/reactor/reactor.o 00:03:06.482 CC test/event/reactor_perf/reactor_perf.o 00:03:06.482 CC test/event/event_perf/event_perf.o 00:03:06.482 CC examples/thread/thread/thread_ex.o 00:03:06.482 CC test/event/app_repeat/app_repeat.o 00:03:06.482 CC test/event/scheduler/scheduler.o 00:03:06.482 LINK lsvmd 00:03:06.482 LINK vhost 00:03:06.482 LINK reactor_perf 00:03:06.482 LINK event_perf 00:03:06.482 LINK reactor 00:03:06.742 LINK led 00:03:06.742 LINK app_repeat 00:03:06.742 LINK hello_sock 00:03:06.742 LINK thread 00:03:06.742 LINK idxd_perf 00:03:06.742 LINK scheduler 00:03:06.742 CC test/nvme/fdp/fdp.o 00:03:06.742 CC test/nvme/compliance/nvme_compliance.o 00:03:06.742 CC test/nvme/overhead/overhead.o 00:03:06.742 CC test/nvme/sgl/sgl.o 00:03:06.742 CC test/nvme/startup/startup.o 00:03:06.742 CC test/nvme/e2edp/nvme_dp.o 00:03:06.742 CC test/nvme/err_injection/err_injection.o 00:03:06.742 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.742 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.742 CC 
test/nvme/connect_stress/connect_stress.o 00:03:06.742 CC test/nvme/aer/aer.o 00:03:06.742 CC test/nvme/simple_copy/simple_copy.o 00:03:06.742 CC test/nvme/cuse/cuse.o 00:03:06.742 CC test/nvme/reset/reset.o 00:03:06.742 CC test/nvme/reserve/reserve.o 00:03:06.742 CC test/nvme/boot_partition/boot_partition.o 00:03:06.742 CC test/blobfs/mkfs/mkfs.o 00:03:06.742 CC test/accel/dif/dif.o 00:03:07.003 LINK memory_ut 00:03:07.003 CC test/lvol/esnap/esnap.o 00:03:07.003 LINK boot_partition 00:03:07.003 LINK startup 00:03:07.003 LINK doorbell_aers 00:03:07.003 LINK err_injection 00:03:07.003 LINK fused_ordering 00:03:07.003 LINK connect_stress 00:03:07.003 LINK reserve 00:03:07.003 LINK mkfs 00:03:07.003 LINK simple_copy 00:03:07.003 LINK sgl 00:03:07.003 LINK nvme_dp 00:03:07.003 LINK aer 00:03:07.003 LINK overhead 00:03:07.003 LINK reset 00:03:07.003 LINK nvme_compliance 00:03:07.263 LINK fdp 00:03:07.263 CC examples/nvme/hotplug/hotplug.o 00:03:07.263 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:07.263 CC examples/nvme/reconnect/reconnect.o 00:03:07.264 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:07.264 CC examples/nvme/arbitration/arbitration.o 00:03:07.264 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:07.264 CC examples/nvme/abort/abort.o 00:03:07.264 CC examples/nvme/hello_world/hello_world.o 00:03:07.264 CC examples/accel/perf/accel_perf.o 00:03:07.264 CC examples/blob/cli/blobcli.o 00:03:07.264 LINK iscsi_fuzz 00:03:07.264 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:07.264 CC examples/blob/hello_world/hello_blob.o 00:03:07.264 LINK pmr_persistence 00:03:07.525 LINK hotplug 00:03:07.525 LINK cmb_copy 00:03:07.525 LINK hello_world 00:03:07.525 LINK dif 00:03:07.525 LINK reconnect 00:03:07.525 LINK arbitration 00:03:07.525 LINK abort 00:03:07.525 LINK hello_fsdev 00:03:07.525 LINK hello_blob 00:03:07.525 LINK nvme_manage 00:03:07.785 LINK accel_perf 00:03:07.785 LINK blobcli 00:03:08.046 LINK cuse 00:03:08.046 CC test/bdev/bdevio/bdevio.o 00:03:08.306 CC examples/bdev/hello_world/hello_bdev.o 00:03:08.306 CC examples/bdev/bdevperf/bdevperf.o 00:03:08.307 LINK bdevio 00:03:08.567 LINK hello_bdev 00:03:09.158 LINK bdevperf 00:03:09.729 CC examples/nvmf/nvmf/nvmf.o 00:03:09.990 LINK nvmf 00:03:10.932 LINK esnap 00:03:11.193 00:03:11.193 real 0m54.283s 00:03:11.193 user 7m50.456s 00:03:11.193 sys 4m25.651s 00:03:11.193 12:37:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:11.193 12:37:51 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.193 ************************************ 00:03:11.193 END TEST make 00:03:11.193 ************************************ 00:03:11.193 12:37:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.193 12:37:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.193 12:37:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.193 12:37:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.193 12:37:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.193 12:37:51 -- pm/common@44 -- $ pid=288730 00:03:11.193 12:37:51 -- pm/common@50 -- $ kill -TERM 288730 00:03:11.193 12:37:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.193 12:37:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.193 12:37:51 -- pm/common@44 -- $ pid=288731 00:03:11.193 12:37:51 -- pm/common@50 -- $ kill -TERM 288731 00:03:11.194 
12:37:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.194 12:37:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:11.194 12:37:51 -- pm/common@44 -- $ pid=288733 00:03:11.194 12:37:51 -- pm/common@50 -- $ kill -TERM 288733 00:03:11.194 12:37:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.194 12:37:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:11.194 12:37:51 -- pm/common@44 -- $ pid=288757 00:03:11.194 12:37:51 -- pm/common@50 -- $ sudo -E kill -TERM 288757 00:03:11.455 12:37:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:11.455 12:37:51 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:11.455 12:37:51 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:11.455 12:37:51 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:11.455 12:37:51 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:11.455 12:37:51 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:11.455 12:37:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.455 12:37:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.455 12:37:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.455 12:37:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.455 12:37:51 -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.455 12:37:51 -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.455 12:37:51 -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.455 12:37:51 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.455 12:37:51 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.455 12:37:51 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.455 12:37:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.455 12:37:51 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.455 12:37:51 -- scripts/common.sh@345 -- # : 1 00:03:11.455 12:37:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.455 12:37:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.455 12:37:51 -- scripts/common.sh@365 -- # decimal 1 00:03:11.455 12:37:51 -- scripts/common.sh@353 -- # local d=1 00:03:11.455 12:37:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.455 12:37:51 -- scripts/common.sh@355 -- # echo 1 00:03:11.455 12:37:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.455 12:37:51 -- scripts/common.sh@366 -- # decimal 2 00:03:11.455 12:37:51 -- scripts/common.sh@353 -- # local d=2 00:03:11.455 12:37:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.455 12:37:51 -- scripts/common.sh@355 -- # echo 2 00:03:11.455 12:37:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.455 12:37:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.455 12:37:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.455 12:37:51 -- scripts/common.sh@368 -- # return 0 00:03:11.455 12:37:51 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.455 12:37:51 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:11.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.455 --rc genhtml_branch_coverage=1 00:03:11.455 --rc genhtml_function_coverage=1 00:03:11.455 --rc genhtml_legend=1 00:03:11.455 --rc geninfo_all_blocks=1 00:03:11.455 --rc geninfo_unexecuted_blocks=1 00:03:11.455 00:03:11.455 ' 00:03:11.456 12:37:51 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.456 --rc genhtml_branch_coverage=1 00:03:11.456 --rc genhtml_function_coverage=1 00:03:11.456 --rc genhtml_legend=1 00:03:11.456 --rc geninfo_all_blocks=1 00:03:11.456 --rc geninfo_unexecuted_blocks=1 00:03:11.456 00:03:11.456 ' 00:03:11.456 12:37:51 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.456 --rc genhtml_branch_coverage=1 00:03:11.456 --rc genhtml_function_coverage=1 00:03:11.456 --rc genhtml_legend=1 00:03:11.456 --rc geninfo_all_blocks=1 00:03:11.456 --rc geninfo_unexecuted_blocks=1 00:03:11.456 00:03:11.456 ' 00:03:11.456 12:37:51 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:11.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.456 --rc genhtml_branch_coverage=1 00:03:11.456 --rc genhtml_function_coverage=1 00:03:11.456 --rc genhtml_legend=1 00:03:11.456 --rc geninfo_all_blocks=1 00:03:11.456 --rc geninfo_unexecuted_blocks=1 00:03:11.456 00:03:11.456 ' 00:03:11.456 12:37:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:11.456 12:37:51 -- nvmf/common.sh@7 -- # uname -s 00:03:11.456 12:37:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.456 12:37:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.456 12:37:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.456 12:37:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.456 12:37:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.456 12:37:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.456 12:37:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.456 12:37:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.456 12:37:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.456 12:37:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.456 12:37:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:11.456 12:37:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:11.456 12:37:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.456 12:37:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.456 12:37:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:11.456 12:37:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.456 12:37:51 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:11.456 12:37:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.456 12:37:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.456 12:37:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.456 12:37:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.456 12:37:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.456 12:37:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.456 12:37:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.456 12:37:51 -- paths/export.sh@5 -- # export PATH 00:03:11.456 12:37:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.456 12:37:51 -- nvmf/common.sh@51 -- # : 0 00:03:11.456 12:37:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.456 12:37:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:11.456 12:37:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.456 12:37:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.456 12:37:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.456 12:37:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.456 12:37:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.456 12:37:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.456 12:37:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.456 12:37:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.456 12:37:51 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.456 12:37:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.456 12:37:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.456 12:37:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
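The scripts/common.sh xtrace above is the lcov version gate: 'lt 1.15 2' splits both version strings on '.', '-', and ':' and compares them component by component before the --rc lcov_* options are chosen. A minimal standalone sketch of that comparison, reconstructed from the trace (function names follow the trace; treating missing components as 0 is an assumption of this sketch, not necessarily what the shipped script does):

#!/usr/bin/env bash
# Component-wise version compare, as exercised by: lt 1.15 2
cmp_versions() {
    local op=$2 ver1 ver2 v a b
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"    # "2"    -> (2)
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # sketch assumption: missing parts count as 0
        (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '=' || $op == '<=' || $op == '>=' ]]   # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov older than 2: enable --rc lcov_branch_coverage / lcov_function_coverage"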
00:03:11.456 12:37:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.456 12:37:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:11.456 12:37:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.718 12:37:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.718 12:37:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.718 12:37:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.718 12:37:51 -- spdk/autotest.sh@48 -- # udevadm_pid=353940 00:03:11.718 12:37:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.718 12:37:51 -- pm/common@17 -- # local monitor 00:03:11.718 12:37:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.718 12:37:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.718 12:37:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.718 12:37:51 -- pm/common@21 -- # date +%s 00:03:11.718 12:37:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.718 12:37:51 -- pm/common@21 -- # date +%s 00:03:11.718 12:37:51 -- pm/common@25 -- # sleep 1 00:03:11.718 12:37:51 -- pm/common@21 -- # date +%s 00:03:11.718 12:37:51 -- pm/common@21 -- # date +%s 00:03:11.718 12:37:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732534671 00:03:11.718 12:37:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732534671 00:03:11.718 12:37:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732534671 00:03:11.718 12:37:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732534671 00:03:11.718 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732534671_collect-cpu-load.pm.log 00:03:11.718 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732534671_collect-cpu-temp.pm.log 00:03:11.718 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732534671_collect-vmstat.pm.log 00:03:11.718 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732534671_collect-bmc-pm.bmc.pm.log 00:03:12.660 12:37:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.660 12:37:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.660 12:37:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.660 12:37:52 -- common/autotest_common.sh@10 -- # set +x 00:03:12.660 12:37:52 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.660 12:37:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:12.660 12:37:52 -- common/autotest_common.sh@10 -- # set +x 00:03:12.660 12:37:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:12.660 12:37:52 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.660 12:37:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.660 12:37:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:12.660 12:37:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.660 12:37:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.660 12:37:52 -- common/autotest_common.sh@1457 -- # uname 00:03:12.660 12:37:52 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:12.660 12:37:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.660 12:37:52 -- common/autotest_common.sh@1477 -- # uname 00:03:12.660 12:37:52 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:12.660 12:37:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.660 12:37:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.660 lcov: LCOV version 1.15 00:03:12.660 12:37:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:27.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.574 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:42.618 12:38:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:42.618 12:38:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.618 12:38:22 -- common/autotest_common.sh@10 -- # set +x 00:03:42.618 12:38:22 -- spdk/autotest.sh@78 -- # rm -f 00:03:42.618 12:38:22 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.826 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:46.826 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:46.826 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:46.826 12:38:26 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:46.826 12:38:26 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:46.826 12:38:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:46.826 12:38:26 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:46.826 12:38:26 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:46.826 12:38:26 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:46.826 12:38:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:46.826 12:38:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.826 12:38:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:46.826 12:38:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:46.826 12:38:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:46.826 12:38:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:46.826 12:38:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:46.826 12:38:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:46.826 12:38:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.088 No valid GPT data, bailing 00:03:47.088 12:38:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.088 12:38:26 -- scripts/common.sh@394 -- # pt= 00:03:47.088 12:38:26 -- scripts/common.sh@395 -- # return 1 00:03:47.088 12:38:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.088 1+0 records in 00:03:47.088 1+0 records out 00:03:47.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053078 s, 198 MB/s 00:03:47.088 12:38:26 -- spdk/autotest.sh@105 -- # sync 00:03:47.088 12:38:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.088 12:38:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.088 12:38:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.088 12:38:35 -- spdk/autotest.sh@111 -- # uname -s 00:03:57.088 12:38:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:57.088 12:38:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:57.088 12:38:35 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:59.672 Hugepages 00:03:59.672 node hugesize free / total 00:03:59.672 node0 1048576kB 0 / 0 00:03:59.672 node0 2048kB 0 / 0 00:03:59.672 node1 1048576kB 0 / 0 00:03:59.672 node1 2048kB 0 / 0 00:03:59.672 00:03:59.672 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.672 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:59.672 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:59.672 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:59.672 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:59.672 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:59.672 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:59.672 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:59.672 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:59.672 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:59.672 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:59.672 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:59.672 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:59.672 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:59.672 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:59.672 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:59.672 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:59.672 I/OAT 0000:80:01.7 8086 0b00 
1 ioatdma - - 00:03:59.672 12:38:39 -- spdk/autotest.sh@117 -- # uname -s 00:03:59.672 12:38:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:59.672 12:38:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:59.672 12:38:39 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.877 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.877 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:05.260 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:05.521 12:38:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.462 12:38:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.462 12:38:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.462 12:38:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.462 12:38:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.462 12:38:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.462 12:38:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.462 12:38:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.462 12:38:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.462 12:38:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.462 12:38:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:06.462 12:38:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:06.722 12:38:46 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.930 Waiting for block devices as requested 00:04:10.930 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:10.930 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:10.930 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:10.930 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:10.930 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:10.930 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:10.930 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:10.930 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:11.192 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:11.192 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:11.192 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:11.453 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:11.453 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:11.453 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:11.714 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:11.714 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:11.714 0000:00:01.1 (8086 0b00): vfio-pci 
-> ioatdma 00:04:11.975 12:38:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:11.975 12:38:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:11.975 12:38:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:11.975 12:38:51 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:11.975 12:38:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:11.975 12:38:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:11.975 12:38:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:11.975 12:38:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:11.975 12:38:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:11.975 12:38:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:11.975 12:38:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:11.975 12:38:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:11.975 12:38:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:11.975 12:38:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:11.975 12:38:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:11.975 12:38:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:11.975 12:38:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:11.975 12:38:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:11.975 12:38:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:11.975 12:38:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:11.975 12:38:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:11.975 12:38:51 -- common/autotest_common.sh@1543 -- # continue 00:04:11.975 12:38:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:11.975 12:38:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.975 12:38:51 -- common/autotest_common.sh@10 -- # set +x 00:04:12.236 12:38:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:12.236 12:38:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.236 12:38:51 -- common/autotest_common.sh@10 -- # set +x 00:04:12.236 12:38:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.444 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:16.444 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:16.705 12:38:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
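The id-ctrl grep/cut sequence above is what decides whether pre_cleanup may touch the controller: OACS comes back as 0x5f, whose bit 3 (0x8) is the NVMe namespace-management capability, and an unvmcap of 0 means there is no unallocated capacity to reclaim, hence the 'continue'. The same check condensed into a sketch (nvme-cli assumed present; /dev/nvme0 is the controller node resolved from the BDF in the trace):

#!/usr/bin/env bash
ctrlr=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0x5f' in this run
if (( (oacs & 0x8) == 0 )); then                               # bit 3: namespace management
    echo "$ctrlr: no namespace management support, skipping"; exit 0
fi
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # ' 0' in this run
(( unvmcap == 0 )) && echo "$ctrlr: no unallocated NVM capacity, nothing to revert"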
00:04:16.705 12:38:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.705 12:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:16.705 12:38:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:16.705 12:38:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:16.705 12:38:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:16.705 12:38:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:16.705 12:38:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:16.705 12:38:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:16.705 12:38:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:16.705 12:38:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:16.705 12:38:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:16.705 12:38:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:16.705 12:38:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.705 12:38:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:16.705 12:38:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:16.705 12:38:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:16.705 12:38:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:16.705 12:38:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:16.705 12:38:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:16.705 12:38:56 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:16.705 12:38:56 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:16.705 12:38:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:16.705 12:38:56 -- common/autotest_common.sh@1572 -- # return 0 00:04:16.705 12:38:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:16.705 12:38:56 -- common/autotest_common.sh@1580 -- # return 0 00:04:16.705 12:38:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:16.705 12:38:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:16.705 12:38:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:16.705 12:38:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:16.705 12:38:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:16.705 12:38:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.705 12:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:16.705 12:38:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:16.705 12:38:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:16.705 12:38:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.705 12:38:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.705 12:38:56 -- common/autotest_common.sh@10 -- # set +x 00:04:16.705 ************************************ 00:04:16.705 START TEST env 00:04:16.705 ************************************ 00:04:16.705 12:38:56 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:16.965 * Looking for test storage... 
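get_nvme_bdfs, called from both pre_cleanup and the opal_revert_cleanup path traced above, builds its list by letting scripts/gen_nvme.sh emit an SPDK bdev config and pulling each controller's PCI address out with jq. Condensed to its core (paths and the jq filter are taken verbatim from the trace):

#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# gen_nvme.sh prints attach-controller config; params.traddr carries the BDF
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"    # 0000:65:00.0 on this machine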
00:04:16.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:16.965 12:38:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.965 12:38:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.965 12:38:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.965 12:38:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.965 12:38:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.965 12:38:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.965 12:38:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.965 12:38:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.965 12:38:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.965 12:38:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.965 12:38:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.965 12:38:56 env -- scripts/common.sh@344 -- # case "$op" in 00:04:16.965 12:38:56 env -- scripts/common.sh@345 -- # : 1 00:04:16.965 12:38:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.965 12:38:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.965 12:38:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:16.965 12:38:56 env -- scripts/common.sh@353 -- # local d=1 00:04:16.965 12:38:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.965 12:38:56 env -- scripts/common.sh@355 -- # echo 1 00:04:16.965 12:38:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.965 12:38:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:16.965 12:38:56 env -- scripts/common.sh@353 -- # local d=2 00:04:16.965 12:38:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.965 12:38:56 env -- scripts/common.sh@355 -- # echo 2 00:04:16.965 12:38:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.965 12:38:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.965 12:38:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.965 12:38:56 env -- scripts/common.sh@368 -- # return 0 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:16.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.965 --rc genhtml_branch_coverage=1 00:04:16.965 --rc genhtml_function_coverage=1 00:04:16.965 --rc genhtml_legend=1 00:04:16.965 --rc geninfo_all_blocks=1 00:04:16.965 --rc geninfo_unexecuted_blocks=1 00:04:16.965 00:04:16.965 ' 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:16.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.965 --rc genhtml_branch_coverage=1 00:04:16.965 --rc genhtml_function_coverage=1 00:04:16.965 --rc genhtml_legend=1 00:04:16.965 --rc geninfo_all_blocks=1 00:04:16.965 --rc geninfo_unexecuted_blocks=1 00:04:16.965 00:04:16.965 ' 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:16.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.965 --rc genhtml_branch_coverage=1 00:04:16.965 --rc genhtml_function_coverage=1 
00:04:16.965 --rc genhtml_legend=1 00:04:16.965 --rc geninfo_all_blocks=1 00:04:16.965 --rc geninfo_unexecuted_blocks=1 00:04:16.965 00:04:16.965 ' 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:16.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.965 --rc genhtml_branch_coverage=1 00:04:16.965 --rc genhtml_function_coverage=1 00:04:16.965 --rc genhtml_legend=1 00:04:16.965 --rc geninfo_all_blocks=1 00:04:16.965 --rc geninfo_unexecuted_blocks=1 00:04:16.965 00:04:16.965 ' 00:04:16.965 12:38:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.965 12:38:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.965 12:38:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.965 ************************************ 00:04:16.965 START TEST env_memory 00:04:16.965 ************************************ 00:04:16.965 12:38:56 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:16.965 00:04:16.965 00:04:16.965 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.965 http://cunit.sourceforge.net/ 00:04:16.965 00:04:16.965 00:04:16.965 Suite: memory 00:04:17.226 Test: alloc and free memory map ...[2024-11-25 12:38:56.871736] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:17.226 passed 00:04:17.226 Test: mem map translation ...[2024-11-25 12:38:56.897119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:17.226 [2024-11-25 12:38:56.897138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:17.226 [2024-11-25 12:38:56.897183] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:17.226 [2024-11-25 12:38:56.897190] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:17.226 passed 00:04:17.226 Test: mem map registration ...[2024-11-25 12:38:56.952269] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:17.226 [2024-11-25 12:38:56.952284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:17.226 passed 00:04:17.226 Test: mem map adjacent registrations ...passed 00:04:17.226 00:04:17.226 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.226 suites 1 1 n/a 0 0 00:04:17.226 tests 4 4 4 0 0 00:04:17.226 asserts 152 152 152 0 n/a 00:04:17.226 00:04:17.226 Elapsed time = 0.193 seconds 00:04:17.226 00:04:17.226 real 0m0.208s 00:04:17.226 user 0m0.197s 00:04:17.226 sys 0m0.010s 00:04:17.226 12:38:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.226 12:38:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:17.226 ************************************ 00:04:17.226 END TEST env_memory 00:04:17.226 ************************************ 00:04:17.226 12:38:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:17.226 12:38:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.226 12:38:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.226 12:38:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:17.226 ************************************ 00:04:17.226 START TEST env_vtophys 00:04:17.226 ************************************ 00:04:17.226 12:38:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:17.488 EAL: lib.eal log level changed from notice to debug 00:04:17.488 EAL: Detected lcore 0 as core 0 on socket 0 00:04:17.488 EAL: Detected lcore 1 as core 1 on socket 0 00:04:17.488 EAL: Detected lcore 2 as core 2 on socket 0 00:04:17.488 EAL: Detected lcore 3 as core 3 on socket 0 00:04:17.488 EAL: Detected lcore 4 as core 4 on socket 0 00:04:17.488 EAL: Detected lcore 5 as core 5 on socket 0 00:04:17.488 EAL: Detected lcore 6 as core 6 on socket 0 00:04:17.488 EAL: Detected lcore 7 as core 7 on socket 0 00:04:17.488 EAL: Detected lcore 8 as core 8 on socket 0 00:04:17.488 EAL: Detected lcore 9 as core 9 on socket 0 00:04:17.488 EAL: Detected lcore 10 as core 10 on socket 0 00:04:17.488 EAL: Detected lcore 11 as core 11 on socket 0 00:04:17.488 EAL: Detected lcore 12 as core 12 on socket 0 00:04:17.488 EAL: Detected lcore 13 as core 13 on socket 0 00:04:17.488 EAL: Detected lcore 14 as core 14 on socket 0 00:04:17.488 EAL: Detected lcore 15 as core 15 on socket 0 00:04:17.488 EAL: Detected lcore 16 as core 16 on socket 0 00:04:17.488 EAL: Detected lcore 17 as core 17 on socket 0 00:04:17.488 EAL: Detected lcore 18 as core 18 on socket 0 00:04:17.488 EAL: Detected lcore 19 as core 19 on socket 0 00:04:17.488 EAL: Detected lcore 20 as core 20 on socket 0 00:04:17.488 EAL: Detected lcore 21 as core 21 on socket 0 00:04:17.488 EAL: Detected lcore 22 as core 22 on socket 0 00:04:17.488 EAL: Detected lcore 23 as core 23 on socket 0 00:04:17.488 EAL: Detected lcore 24 as core 24 on socket 0 00:04:17.488 EAL: Detected lcore 25 as core 25 on socket 0 00:04:17.488 EAL: Detected lcore 26 as core 26 on socket 0 00:04:17.488 EAL: Detected lcore 27 as core 27 on socket 0 00:04:17.488 EAL: Detected lcore 28 as core 28 on socket 0 00:04:17.488 EAL: Detected lcore 29 as core 29 on socket 0 00:04:17.488 EAL: Detected lcore 30 as core 30 on socket 0 00:04:17.488 EAL: Detected lcore 31 as core 31 on socket 0 00:04:17.488 EAL: Detected lcore 32 as core 32 on socket 0 00:04:17.488 EAL: Detected lcore 33 as core 33 on socket 0 00:04:17.488 EAL: Detected lcore 34 as core 34 on socket 0 00:04:17.488 EAL: Detected lcore 35 as core 35 on socket 0 00:04:17.488 EAL: Detected lcore 36 as core 0 on socket 1 00:04:17.488 EAL: Detected lcore 37 as core 1 on socket 1 00:04:17.488 EAL: Detected lcore 38 as core 2 on socket 1 00:04:17.488 EAL: Detected lcore 39 as core 3 on socket 1 00:04:17.488 EAL: Detected lcore 40 as core 4 on socket 1 00:04:17.488 EAL: Detected lcore 41 as core 5 on socket 1 00:04:17.488 EAL: Detected lcore 42 as core 6 on socket 1 00:04:17.488 EAL: Detected lcore 43 as core 7 on socket 1 00:04:17.488 EAL: Detected lcore 44 as core 8 on socket 1 00:04:17.488 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:17.488 EAL: Detected lcore 46 as core 10 on socket 1 00:04:17.488 EAL: Detected lcore 47 as core 11 on socket 1 00:04:17.488 EAL: Detected lcore 48 as core 12 on socket 1 00:04:17.488 EAL: Detected lcore 49 as core 13 on socket 1 00:04:17.488 EAL: Detected lcore 50 as core 14 on socket 1 00:04:17.488 EAL: Detected lcore 51 as core 15 on socket 1 00:04:17.488 EAL: Detected lcore 52 as core 16 on socket 1 00:04:17.488 EAL: Detected lcore 53 as core 17 on socket 1 00:04:17.488 EAL: Detected lcore 54 as core 18 on socket 1 00:04:17.488 EAL: Detected lcore 55 as core 19 on socket 1 00:04:17.488 EAL: Detected lcore 56 as core 20 on socket 1 00:04:17.488 EAL: Detected lcore 57 as core 21 on socket 1 00:04:17.488 EAL: Detected lcore 58 as core 22 on socket 1 00:04:17.488 EAL: Detected lcore 59 as core 23 on socket 1 00:04:17.488 EAL: Detected lcore 60 as core 24 on socket 1 00:04:17.488 EAL: Detected lcore 61 as core 25 on socket 1 00:04:17.488 EAL: Detected lcore 62 as core 26 on socket 1 00:04:17.488 EAL: Detected lcore 63 as core 27 on socket 1 00:04:17.488 EAL: Detected lcore 64 as core 28 on socket 1 00:04:17.488 EAL: Detected lcore 65 as core 29 on socket 1 00:04:17.488 EAL: Detected lcore 66 as core 30 on socket 1 00:04:17.488 EAL: Detected lcore 67 as core 31 on socket 1 00:04:17.488 EAL: Detected lcore 68 as core 32 on socket 1 00:04:17.488 EAL: Detected lcore 69 as core 33 on socket 1 00:04:17.488 EAL: Detected lcore 70 as core 34 on socket 1 00:04:17.488 EAL: Detected lcore 71 as core 35 on socket 1 00:04:17.488 EAL: Detected lcore 72 as core 0 on socket 0 00:04:17.488 EAL: Detected lcore 73 as core 1 on socket 0 00:04:17.488 EAL: Detected lcore 74 as core 2 on socket 0 00:04:17.488 EAL: Detected lcore 75 as core 3 on socket 0 00:04:17.488 EAL: Detected lcore 76 as core 4 on socket 0 00:04:17.488 EAL: Detected lcore 77 as core 5 on socket 0 00:04:17.488 EAL: Detected lcore 78 as core 6 on socket 0 00:04:17.488 EAL: Detected lcore 79 as core 7 on socket 0 00:04:17.488 EAL: Detected lcore 80 as core 8 on socket 0 00:04:17.488 EAL: Detected lcore 81 as core 9 on socket 0 00:04:17.488 EAL: Detected lcore 82 as core 10 on socket 0 00:04:17.488 EAL: Detected lcore 83 as core 11 on socket 0 00:04:17.488 EAL: Detected lcore 84 as core 12 on socket 0 00:04:17.488 EAL: Detected lcore 85 as core 13 on socket 0 00:04:17.488 EAL: Detected lcore 86 as core 14 on socket 0 00:04:17.488 EAL: Detected lcore 87 as core 15 on socket 0 00:04:17.488 EAL: Detected lcore 88 as core 16 on socket 0 00:04:17.488 EAL: Detected lcore 89 as core 17 on socket 0 00:04:17.488 EAL: Detected lcore 90 as core 18 on socket 0 00:04:17.488 EAL: Detected lcore 91 as core 19 on socket 0 00:04:17.488 EAL: Detected lcore 92 as core 20 on socket 0 00:04:17.488 EAL: Detected lcore 93 as core 21 on socket 0 00:04:17.488 EAL: Detected lcore 94 as core 22 on socket 0 00:04:17.488 EAL: Detected lcore 95 as core 23 on socket 0 00:04:17.488 EAL: Detected lcore 96 as core 24 on socket 0 00:04:17.488 EAL: Detected lcore 97 as core 25 on socket 0 00:04:17.488 EAL: Detected lcore 98 as core 26 on socket 0 00:04:17.488 EAL: Detected lcore 99 as core 27 on socket 0 00:04:17.488 EAL: Detected lcore 100 as core 28 on socket 0 00:04:17.489 EAL: Detected lcore 101 as core 29 on socket 0 00:04:17.489 EAL: Detected lcore 102 as core 30 on socket 0 00:04:17.489 EAL: Detected lcore 103 as core 31 on socket 0 00:04:17.489 EAL: Detected lcore 104 as core 32 on socket 0 00:04:17.489 EAL: Detected lcore 105 as core 33 on socket 0 00:04:17.489 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:17.489 EAL: Detected lcore 107 as core 35 on socket 0 00:04:17.489 EAL: Detected lcore 108 as core 0 on socket 1 00:04:17.489 EAL: Detected lcore 109 as core 1 on socket 1 00:04:17.489 EAL: Detected lcore 110 as core 2 on socket 1 00:04:17.489 EAL: Detected lcore 111 as core 3 on socket 1 00:04:17.489 EAL: Detected lcore 112 as core 4 on socket 1 00:04:17.489 EAL: Detected lcore 113 as core 5 on socket 1 00:04:17.489 EAL: Detected lcore 114 as core 6 on socket 1 00:04:17.489 EAL: Detected lcore 115 as core 7 on socket 1 00:04:17.489 EAL: Detected lcore 116 as core 8 on socket 1 00:04:17.489 EAL: Detected lcore 117 as core 9 on socket 1 00:04:17.489 EAL: Detected lcore 118 as core 10 on socket 1 00:04:17.489 EAL: Detected lcore 119 as core 11 on socket 1 00:04:17.489 EAL: Detected lcore 120 as core 12 on socket 1 00:04:17.489 EAL: Detected lcore 121 as core 13 on socket 1 00:04:17.489 EAL: Detected lcore 122 as core 14 on socket 1 00:04:17.489 EAL: Detected lcore 123 as core 15 on socket 1 00:04:17.489 EAL: Detected lcore 124 as core 16 on socket 1 00:04:17.489 EAL: Detected lcore 125 as core 17 on socket 1 00:04:17.489 EAL: Detected lcore 126 as core 18 on socket 1 00:04:17.489 EAL: Detected lcore 127 as core 19 on socket 1 00:04:17.489 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:17.489 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:17.489 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:17.489 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:17.489 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:17.489 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:17.489 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:17.489 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:17.489 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:17.489 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:17.489 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:17.489 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:17.489 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:17.489 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:17.489 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:17.489 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:17.489 EAL: Maximum logical cores by configuration: 128 00:04:17.489 EAL: Detected CPU lcores: 128 00:04:17.489 EAL: Detected NUMA nodes: 2 00:04:17.489 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:17.489 EAL: Detected shared linkage of DPDK 00:04:17.489 EAL: No shared files mode enabled, IPC will be disabled 00:04:17.489 EAL: Bus pci wants IOVA as 'DC' 00:04:17.489 EAL: Buses did not request a specific IOVA mode. 00:04:17.489 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:17.489 EAL: Selected IOVA mode 'VA' 00:04:17.489 EAL: Probing VFIO support... 00:04:17.489 EAL: IOMMU type 1 (Type 1) is supported 00:04:17.489 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:17.489 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:17.489 EAL: VFIO support initialized 00:04:17.489 EAL: Ask a virtual area of 0x2e000 bytes 00:04:17.489 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:17.489 EAL: Setting up physically contiguous memory... 
00:04:17.489 EAL: Setting maximum number of open files to 524288 00:04:17.489 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:17.489 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:17.489 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:17.489 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.489 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:17.489 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.489 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.489 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:17.489 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:17.489 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.489 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:17.489 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.489 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.489 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:17.489 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:17.489 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.489 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:17.489 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.489 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.489 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:17.489 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:17.489 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.489 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:17.489 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:17.489 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.489 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:17.489 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:17.489 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:17.489 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.489 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:17.489 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:17.489 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.489 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:17.489 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:17.489 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.489 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:17.489 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:17.489 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.489 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:17.489 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:17.489 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.489 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:17.489 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:17.489 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.489 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:17.489 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:17.489 EAL: Ask a virtual area of 0x61000 bytes 00:04:17.489 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:17.489 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:17.489 EAL: Ask a virtual area of 0x400000000 bytes 00:04:17.489 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:17.489 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:17.489 EAL: Hugepages will be freed exactly as allocated. 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: TSC frequency is ~2400000 KHz 00:04:17.489 EAL: Main lcore 0 is ready (tid=7fd25bd04a00;cpuset=[0]) 00:04:17.489 EAL: Trying to obtain current memory policy. 00:04:17.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.489 EAL: Restoring previous memory policy: 0 00:04:17.489 EAL: request: mp_malloc_sync 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: Heap on socket 0 was expanded by 2MB 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:17.489 EAL: Mem event callback 'spdk:(nil)' registered 00:04:17.489 00:04:17.489 00:04:17.489 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.489 http://cunit.sourceforge.net/ 00:04:17.489 00:04:17.489 00:04:17.489 Suite: components_suite 00:04:17.489 Test: vtophys_malloc_test ...passed 00:04:17.489 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:17.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.489 EAL: Restoring previous memory policy: 4 00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.489 EAL: request: mp_malloc_sync 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: Heap on socket 0 was expanded by 4MB 00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.489 EAL: request: mp_malloc_sync 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: Heap on socket 0 was shrunk by 4MB 00:04:17.489 EAL: Trying to obtain current memory policy. 00:04:17.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.489 EAL: Restoring previous memory policy: 4 00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.489 EAL: request: mp_malloc_sync 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: Heap on socket 0 was expanded by 6MB 00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.489 EAL: request: mp_malloc_sync 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: Heap on socket 0 was shrunk by 6MB 00:04:17.489 EAL: Trying to obtain current memory policy. 00:04:17.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.489 EAL: Restoring previous memory policy: 4 00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.489 EAL: request: mp_malloc_sync 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: Heap on socket 0 was expanded by 10MB 00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.489 EAL: request: mp_malloc_sync 00:04:17.489 EAL: No shared files mode enabled, IPC is disabled 00:04:17.489 EAL: Heap on socket 0 was shrunk by 10MB 00:04:17.489 EAL: Trying to obtain current memory policy. 
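The repeating ask/reserve pairs above are self-consistent with the parameters EAL prints: each of the four segment lists per NUMA node is created with n_segs:8192 and hugepage_sz:2097152, and 8192 * 2 MiB = 16 GiB = 0x400000000 bytes, which is exactly the size of every large VA reservation; the small 0x61000-byte area requested just before each one plausibly holds that list's bookkeeping.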
00:04:17.489 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:17.489 EAL: Restoring previous memory policy: 4
00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.489 EAL: request: mp_malloc_sync
00:04:17.489 EAL: No shared files mode enabled, IPC is disabled
00:04:17.489 EAL: Heap on socket 0 was expanded by 18MB
00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.489 EAL: request: mp_malloc_sync
00:04:17.489 EAL: No shared files mode enabled, IPC is disabled
00:04:17.489 EAL: Heap on socket 0 was shrunk by 18MB
00:04:17.489 EAL: Trying to obtain current memory policy.
00:04:17.489 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:17.489 EAL: Restoring previous memory policy: 4
00:04:17.489 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.489 EAL: request: mp_malloc_sync
00:04:17.489 EAL: No shared files mode enabled, IPC is disabled
00:04:17.490 EAL: Heap on socket 0 was expanded by 34MB
00:04:17.490 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.490 EAL: request: mp_malloc_sync
00:04:17.490 EAL: No shared files mode enabled, IPC is disabled
00:04:17.490 EAL: Heap on socket 0 was shrunk by 34MB
00:04:17.490 EAL: Trying to obtain current memory policy.
00:04:17.490 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:17.490 EAL: Restoring previous memory policy: 4
00:04:17.490 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.490 EAL: request: mp_malloc_sync
00:04:17.490 EAL: No shared files mode enabled, IPC is disabled
00:04:17.490 EAL: Heap on socket 0 was expanded by 66MB
00:04:17.490 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.490 EAL: request: mp_malloc_sync
00:04:17.490 EAL: No shared files mode enabled, IPC is disabled
00:04:17.490 EAL: Heap on socket 0 was shrunk by 66MB
00:04:17.490 EAL: Trying to obtain current memory policy.
00:04:17.490 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:17.490 EAL: Restoring previous memory policy: 4
00:04:17.490 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.490 EAL: request: mp_malloc_sync
00:04:17.490 EAL: No shared files mode enabled, IPC is disabled
00:04:17.490 EAL: Heap on socket 0 was expanded by 130MB
00:04:17.490 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.490 EAL: request: mp_malloc_sync
00:04:17.490 EAL: No shared files mode enabled, IPC is disabled
00:04:17.490 EAL: Heap on socket 0 was shrunk by 130MB
00:04:17.490 EAL: Trying to obtain current memory policy.
00:04:17.490 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:17.490 EAL: Restoring previous memory policy: 4
00:04:17.490 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.490 EAL: request: mp_malloc_sync
00:04:17.490 EAL: No shared files mode enabled, IPC is disabled
00:04:17.490 EAL: Heap on socket 0 was expanded by 258MB
00:04:17.490 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.490 EAL: request: mp_malloc_sync
00:04:17.490 EAL: No shared files mode enabled, IPC is disabled
00:04:17.490 EAL: Heap on socket 0 was shrunk by 258MB
00:04:17.490 EAL: Trying to obtain current memory policy.
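Note: the expand/shrink sizes in this suite (4, 6, 10, 18, 34, 66, 130 and 258MB above, with the 514MB and 1026MB rounds continuing below) track the test's power-of-two allocations: each round appears to malloc 2^n MB, and the heap grows by that amount plus one extra 2MB hugepage of bookkeeping. The sequence can be reproduced with a bash one-liner (illustrative only):

  for n in $(seq 1 10); do printf '%sMB ' $(( (1 << n) + 2 )); done; echo   # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB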
00:04:17.490 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:17.750 EAL: Restoring previous memory policy: 4
00:04:17.750 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.750 EAL: request: mp_malloc_sync
00:04:17.750 EAL: No shared files mode enabled, IPC is disabled
00:04:17.750 EAL: Heap on socket 0 was expanded by 514MB
00:04:17.750 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.750 EAL: request: mp_malloc_sync
00:04:17.750 EAL: No shared files mode enabled, IPC is disabled
00:04:17.750 EAL: Heap on socket 0 was shrunk by 514MB
00:04:17.750 EAL: Trying to obtain current memory policy.
00:04:17.750 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:18.011 EAL: Restoring previous memory policy: 4
00:04:18.011 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.011 EAL: request: mp_malloc_sync
00:04:18.011 EAL: No shared files mode enabled, IPC is disabled
00:04:18.011 EAL: Heap on socket 0 was expanded by 1026MB
00:04:18.011 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.011 EAL: request: mp_malloc_sync
00:04:18.011 EAL: No shared files mode enabled, IPC is disabled
00:04:18.011 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:18.011 passed
00:04:18.011
00:04:18.011 Run Summary: Type Total Ran Passed Failed Inactive
00:04:18.011 suites 1 1 n/a 0 0
00:04:18.011 tests 2 2 2 0 0
00:04:18.011 asserts 497 497 497 0 n/a
00:04:18.011
00:04:18.011 Elapsed time = 0.646 seconds
00:04:18.011 EAL: Calling mem event callback 'spdk:(nil)'
00:04:18.011 EAL: request: mp_malloc_sync
00:04:18.011 EAL: No shared files mode enabled, IPC is disabled
00:04:18.011 EAL: Heap on socket 0 was shrunk by 2MB
00:04:18.011 EAL: No shared files mode enabled, IPC is disabled
00:04:18.011 EAL: No shared files mode enabled, IPC is disabled
00:04:18.011 EAL: No shared files mode enabled, IPC is disabled
00:04:18.011
00:04:18.011 real 0m0.797s
00:04:18.011 user 0m0.406s
00:04:18.011 sys 0m0.359s
00:04:18.011 12:38:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.011 12:38:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:18.011 ************************************
00:04:18.011 END TEST env_vtophys
00:04:18.011 ************************************
00:04:18.273 12:38:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:18.273 12:38:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:18.273 12:38:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.273 12:38:57 env -- common/autotest_common.sh@10 -- # set +x
00:04:18.273 ************************************
00:04:18.273 START TEST env_pci
00:04:18.273 ************************************
00:04:18.273 12:38:57 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:18.273
00:04:18.273
00:04:18.273 CUnit - A unit testing framework for C - Version 2.1-3
00:04:18.273 http://cunit.sourceforge.net/
00:04:18.273
00:04:18.273
00:04:18.273 Suite: pci
00:04:18.273 Test: pci_hook ...[2024-11-25 12:38:58.005245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 374154 has claimed it
00:04:18.273 EAL: Cannot find device (10000:00:01.0)
00:04:18.273 EAL: Failed to attach device on primary process
00:04:18.273 passed
00:04:18.273
00:04:18.273 Run Summary: Type Total Ran Passed Failed Inactive
00:04:18.273 suites 1 1 n/a 0 0
00:04:18.273 tests 1 1 1 0 0
00:04:18.273 asserts 25 25 25 0 n/a
00:04:18.273
00:04:18.273 Elapsed time = 0.034 seconds
00:04:18.273
00:04:18.273 real 0m0.056s
00:04:18.273 user 0m0.015s
00:04:18.273 sys 0m0.040s
00:04:18.273 12:38:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:18.273 12:38:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:18.273 ************************************
00:04:18.273 END TEST env_pci
00:04:18.273 ************************************
00:04:18.273 12:38:58 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:18.273 12:38:58 env -- env/env.sh@15 -- # uname
00:04:18.273 12:38:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:18.273 12:38:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:18.273 12:38:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:18.273 12:38:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:18.273 12:38:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:18.273 12:38:58 env -- common/autotest_common.sh@10 -- # set +x
00:04:18.273 ************************************
00:04:18.273 START TEST env_dpdk_post_init
00:04:18.273 ************************************
00:04:18.273 12:38:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:18.273 EAL: Detected CPU lcores: 128
00:04:18.273 EAL: Detected NUMA nodes: 2
00:04:18.273 EAL: Detected shared linkage of DPDK
00:04:18.273 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:18.534 EAL: Selected IOVA mode 'VA'
00:04:18.534 EAL: VFIO support initialized
00:04:18.534 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:18.534 EAL: Using IOMMU type 1 (Type 1)
00:04:18.534 EAL: Ignore mapping IO port bar(1)
00:04:18.794 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:04:18.794 EAL: Ignore mapping IO port bar(1)
00:04:19.055 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:04:19.055 EAL: Ignore mapping IO port bar(1)
00:04:19.055 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:04:19.317 EAL: Ignore mapping IO port bar(1)
00:04:19.317 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:04:19.577 EAL: Ignore mapping IO port bar(1)
00:04:19.577 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:04:19.837 EAL: Ignore mapping IO port bar(1)
00:04:19.837 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:04:19.837 EAL: Ignore mapping IO port bar(1)
00:04:20.098 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:04:20.098 EAL: Ignore mapping IO port bar(1)
00:04:20.359 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:04:20.620 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:04:20.620 EAL: Ignore mapping IO port bar(1)
00:04:20.620 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:04:20.880 EAL: Ignore mapping IO port bar(1)
00:04:20.880 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:04:21.141 EAL: Ignore mapping IO port bar(1)
00:04:21.141 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
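Note on the pci_hook result further above: spdk_pci_device_claim serializes device ownership through one lock file per PCI address under /var/tmp (here /var/tmp/spdk_pci_lock_10000:00:01.0). The test arranges for another process (374154 in this run) to hold the claim first, so the *ERROR* line and the failed attach are the expected outcome and the test passes. When a claim fails outside this test, the lock files can be inspected directly (illustrative only):

  ls -l /var/tmp/spdk_pci_lock_*   # one lock file per claimed PCI device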
00:04:21.402 EAL: Ignore mapping IO port bar(1)
00:04:21.402 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:04:21.402 EAL: Ignore mapping IO port bar(1)
00:04:21.663 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:04:21.663 EAL: Ignore mapping IO port bar(1)
00:04:21.924 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:04:21.924 EAL: Ignore mapping IO port bar(1)
00:04:22.185 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:04:22.185 EAL: Ignore mapping IO port bar(1)
00:04:22.185 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:04:22.185 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:04:22.185 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:04:22.444 Starting DPDK initialization...
00:04:22.444 Starting SPDK post initialization...
00:04:22.444 SPDK NVMe probe
00:04:22.444 Attaching to 0000:65:00.0
00:04:22.444 Attached to 0000:65:00.0
00:04:22.444 Cleaning up...
00:04:24.359
00:04:24.359 real 0m5.741s
00:04:24.359 user 0m0.110s
00:04:24.359 sys 0m0.175s
00:04:24.359 12:39:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:24.359 12:39:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:24.359 ************************************
00:04:24.359 END TEST env_dpdk_post_init
00:04:24.359 ************************************
00:04:24.359 12:39:03 env -- env/env.sh@26 -- # uname
00:04:24.359 12:39:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:24.359 12:39:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:24.359 12:39:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:24.359 12:39:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:24.359 12:39:03 env -- common/autotest_common.sh@10 -- # set +x
00:04:24.359 ************************************
00:04:24.359 START TEST env_mem_callbacks
00:04:24.359 ************************************
00:04:24.359 12:39:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:24.359 EAL: Detected CPU lcores: 128
00:04:24.359 EAL: Detected NUMA nodes: 2
00:04:24.359 EAL: Detected shared linkage of DPDK
00:04:24.359 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:24.359 EAL: Selected IOVA mode 'VA'
00:04:24.359 EAL: VFIO support initialized
00:04:24.359 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:24.359
00:04:24.359
00:04:24.359 CUnit - A unit testing framework for C - Version 2.1-3
00:04:24.359 http://cunit.sourceforge.net/
00:04:24.359
00:04:24.359
00:04:24.359 Suite: memory
00:04:24.359 Test: test ...
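Note: the memory-suite trace that follows exercises the 'spdk:(nil)' mem event callback registered during EAL init. Each register line reports a new virtual address range being added to SPDK's translation maps as the DPDK heap grows, each unregister line the reverse, and the buf/len PASSED lines confirm that the test buffers landed inside registered regions. The binary can be rerun standalone exactly as the harness invokes it above:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks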
00:04:24.360 register 0x200000200000 2097152
00:04:24.360 malloc 3145728
00:04:24.360 register 0x200000400000 4194304
00:04:24.360 buf 0x200000500000 len 3145728 PASSED
00:04:24.360 malloc 64
00:04:24.360 buf 0x2000004fff40 len 64 PASSED
00:04:24.360 malloc 4194304
00:04:24.360 register 0x200000800000 6291456
00:04:24.360 buf 0x200000a00000 len 4194304 PASSED
00:04:24.360 free 0x200000500000 3145728
00:04:24.360 free 0x2000004fff40 64
00:04:24.360 unregister 0x200000400000 4194304 PASSED
00:04:24.360 free 0x200000a00000 4194304
00:04:24.360 unregister 0x200000800000 6291456 PASSED
00:04:24.360 malloc 8388608
00:04:24.360 register 0x200000400000 10485760
00:04:24.360 buf 0x200000600000 len 8388608 PASSED
00:04:24.360 free 0x200000600000 8388608
00:04:24.360 unregister 0x200000400000 10485760 PASSED
00:04:24.360 passed
00:04:24.360
00:04:24.360 Run Summary: Type Total Ran Passed Failed Inactive
00:04:24.360 suites 1 1 n/a 0 0
00:04:24.360 tests 1 1 1 0 0
00:04:24.360 asserts 15 15 15 0 n/a
00:04:24.360
00:04:24.360 Elapsed time = 0.008 seconds
00:04:24.360
00:04:24.360 real 0m0.069s
00:04:24.360 user 0m0.019s
00:04:24.360 sys 0m0.049s
00:04:24.360 12:39:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:24.360 12:39:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:24.360 ************************************
00:04:24.360 END TEST env_mem_callbacks
00:04:24.360 ************************************
00:04:24.360
00:04:24.360 real 0m7.481s
00:04:24.360 user 0m1.014s
00:04:24.360 sys 0m1.014s
00:04:24.360 12:39:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:24.360 12:39:04 env -- common/autotest_common.sh@10 -- # set +x
00:04:24.360 ************************************
00:04:24.360 END TEST env
00:04:24.360 ************************************
00:04:24.360 12:39:04 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:24.360 12:39:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:24.360 12:39:04 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:24.360 12:39:04 -- common/autotest_common.sh@10 -- # set +x
00:04:24.360 ************************************
00:04:24.360 START TEST rpc
00:04:24.360 ************************************
00:04:24.360 12:39:04 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:24.360 * Looking for test storage...
00:04:24.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:24.360 12:39:04 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:24.360 12:39:04 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:24.360 12:39:04 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:24.621 12:39:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:24.621 12:39:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:24.621 12:39:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:24.621 12:39:04 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:24.621 12:39:04 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:24.621 12:39:04 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:24.621 12:39:04 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:24.621 12:39:04 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:24.621 12:39:04 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:24.621 12:39:04 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:24.621 12:39:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:24.621 12:39:04 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:24.621 12:39:04 rpc -- scripts/common.sh@345 -- # : 1
00:04:24.621 12:39:04 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:24.621 12:39:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:24.621 12:39:04 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:24.621 12:39:04 rpc -- scripts/common.sh@353 -- # local d=1
00:04:24.621 12:39:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:24.621 12:39:04 rpc -- scripts/common.sh@355 -- # echo 1
00:04:24.621 12:39:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:24.621 12:39:04 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:24.621 12:39:04 rpc -- scripts/common.sh@353 -- # local d=2
00:04:24.621 12:39:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:24.621 12:39:04 rpc -- scripts/common.sh@355 -- # echo 2
00:04:24.621 12:39:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:24.621 12:39:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:24.621 12:39:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:24.621 12:39:04 rpc -- scripts/common.sh@368 -- # return 0
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:24.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.621 --rc genhtml_branch_coverage=1
00:04:24.621 --rc genhtml_function_coverage=1
00:04:24.621 --rc genhtml_legend=1
00:04:24.621 --rc geninfo_all_blocks=1
00:04:24.621 --rc geninfo_unexecuted_blocks=1
00:04:24.621
00:04:24.621 '
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:24.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.621 --rc genhtml_branch_coverage=1
00:04:24.621 --rc genhtml_function_coverage=1
00:04:24.621 --rc genhtml_legend=1
00:04:24.621 --rc geninfo_all_blocks=1
00:04:24.621 --rc geninfo_unexecuted_blocks=1
00:04:24.621
00:04:24.621 '
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:24.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.621 --rc genhtml_branch_coverage=1
00:04:24.621 --rc genhtml_function_coverage=1
00:04:24.621 --rc genhtml_legend=1
00:04:24.621 --rc geninfo_all_blocks=1
00:04:24.621 --rc geninfo_unexecuted_blocks=1
00:04:24.621
00:04:24.621 '
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:24.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:24.621 --rc genhtml_branch_coverage=1
00:04:24.621 --rc genhtml_function_coverage=1
00:04:24.621 --rc genhtml_legend=1
00:04:24.621 --rc geninfo_all_blocks=1
00:04:24.621 --rc geninfo_unexecuted_blocks=1
00:04:24.621
00:04:24.621 '
00:04:24.621 12:39:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=375710
00:04:24.621 12:39:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:24.621 12:39:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 375710
00:04:24.621 12:39:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 375710 ']'
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:24.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:24.621 12:39:04 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:24.622 [2024-11-25 12:39:04.416199] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:04:24.622 [2024-11-25 12:39:04.416272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375710 ]
00:04:24.622 [2024-11-25 12:39:04.499649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:24.883 [2024-11-25 12:39:04.541012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:24.884 [2024-11-25 12:39:04.541047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 375710' to capture a snapshot of events at runtime.
00:04:24.884 [2024-11-25 12:39:04.541055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:24.884 [2024-11-25 12:39:04.541062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:24.884 [2024-11-25 12:39:04.541068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid375710 for offline analysis/debug.
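Note: spdk_tgt was started with -e bdev, so only the bdev tracepoint group is enabled — that is the group mask 0x8 visible in the trace_get_info dump further below, where bdev carries tpoint_mask 0xffffffffffffffff and every other group 0x0. The notices above give the live capture command; decoding the copied shm file afterwards would look roughly like this (a sketch — the -f form assumes the spdk_trace reader built in this tree accepts a trace-file path):

  build/bin/spdk_trace -s spdk_tgt -p 375710                  # live snapshot, per the notice above
  build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid375710   # offline decode of the copied file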
00:04:24.884 [2024-11-25 12:39:04.541700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:25.455 12:39:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:25.455 12:39:05 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:25.455 12:39:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:25.455 12:39:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:25.455 12:39:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:25.455 12:39:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:25.455 12:39:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:25.455 12:39:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:25.455 12:39:05 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:25.455 ************************************
00:04:25.455 START TEST rpc_integrity
00:04:25.455 ************************************
00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:25.455 {
00:04:25.455 "name": "Malloc0",
00:04:25.455 "aliases": [
00:04:25.455 "ad505022-6367-49cf-9f6d-58254b8a8611"
00:04:25.455 ],
00:04:25.455 "product_name": "Malloc disk",
00:04:25.455 "block_size": 512,
00:04:25.455 "num_blocks": 16384,
00:04:25.455 "uuid": "ad505022-6367-49cf-9f6d-58254b8a8611",
00:04:25.455 "assigned_rate_limits": {
00:04:25.455 "rw_ios_per_sec": 0,
00:04:25.455 "rw_mbytes_per_sec": 0,
00:04:25.455 "r_mbytes_per_sec": 0,
00:04:25.455 "w_mbytes_per_sec": 0
00:04:25.455 },
00:04:25.455 "claimed": false, 00:04:25.455 "zoned": false, 00:04:25.455 "supported_io_types": { 00:04:25.455 "read": true, 00:04:25.455 "write": true, 00:04:25.455 "unmap": true, 00:04:25.455 "flush": true, 00:04:25.455 "reset": true, 00:04:25.455 "nvme_admin": false, 00:04:25.455 "nvme_io": false, 00:04:25.455 "nvme_io_md": false, 00:04:25.455 "write_zeroes": true, 00:04:25.455 "zcopy": true, 00:04:25.455 "get_zone_info": false, 00:04:25.455 "zone_management": false, 00:04:25.455 "zone_append": false, 00:04:25.455 "compare": false, 00:04:25.455 "compare_and_write": false, 00:04:25.455 "abort": true, 00:04:25.455 "seek_hole": false, 00:04:25.455 "seek_data": false, 00:04:25.455 "copy": true, 00:04:25.455 "nvme_iov_md": false 00:04:25.455 }, 00:04:25.455 "memory_domains": [ 00:04:25.455 { 00:04:25.455 "dma_device_id": "system", 00:04:25.455 "dma_device_type": 1 00:04:25.455 }, 00:04:25.455 { 00:04:25.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.455 "dma_device_type": 2 00:04:25.455 } 00:04:25.455 ], 00:04:25.455 "driver_specific": {} 00:04:25.455 } 00:04:25.455 ]' 00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.455 [2024-11-25 12:39:05.343912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:25.455 [2024-11-25 12:39:05.343943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.455 [2024-11-25 12:39:05.343956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18ddb90 00:04:25.455 [2024-11-25 12:39:05.343963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.455 [2024-11-25 12:39:05.345322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.455 [2024-11-25 12:39:05.345342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.455 Passthru0 00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.455 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.455 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.717 { 00:04:25.717 "name": "Malloc0", 00:04:25.717 "aliases": [ 00:04:25.717 "ad505022-6367-49cf-9f6d-58254b8a8611" 00:04:25.717 ], 00:04:25.717 "product_name": "Malloc disk", 00:04:25.717 "block_size": 512, 00:04:25.717 "num_blocks": 16384, 00:04:25.717 "uuid": "ad505022-6367-49cf-9f6d-58254b8a8611", 00:04:25.717 "assigned_rate_limits": { 00:04:25.717 "rw_ios_per_sec": 0, 00:04:25.717 "rw_mbytes_per_sec": 0, 00:04:25.717 "r_mbytes_per_sec": 0, 00:04:25.717 "w_mbytes_per_sec": 0 00:04:25.717 }, 00:04:25.717 "claimed": true, 00:04:25.717 "claim_type": "exclusive_write", 00:04:25.717 "zoned": false, 00:04:25.717 "supported_io_types": { 00:04:25.717 "read": true, 00:04:25.717 "write": true, 00:04:25.717 "unmap": true, 00:04:25.717 "flush": 
00:04:25.717 "reset": true,
00:04:25.717 "nvme_admin": false,
00:04:25.717 "nvme_io": false,
00:04:25.717 "nvme_io_md": false,
00:04:25.717 "write_zeroes": true,
00:04:25.717 "zcopy": true,
00:04:25.717 "get_zone_info": false,
00:04:25.717 "zone_management": false,
00:04:25.717 "zone_append": false,
00:04:25.717 "compare": false,
00:04:25.717 "compare_and_write": false,
00:04:25.717 "abort": true,
00:04:25.717 "seek_hole": false,
00:04:25.717 "seek_data": false,
00:04:25.717 "copy": true,
00:04:25.717 "nvme_iov_md": false
00:04:25.717 },
00:04:25.717 "memory_domains": [
00:04:25.717 {
00:04:25.717 "dma_device_id": "system",
00:04:25.717 "dma_device_type": 1
00:04:25.717 },
00:04:25.717 {
00:04:25.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:25.717 "dma_device_type": 2
00:04:25.717 }
00:04:25.717 ],
00:04:25.717 "driver_specific": {}
00:04:25.717 },
00:04:25.717 {
00:04:25.717 "name": "Passthru0",
00:04:25.717 "aliases": [
00:04:25.717 "678c01ef-4ca8-537c-92a4-1d0bc65523fd"
00:04:25.717 ],
00:04:25.717 "product_name": "passthru",
00:04:25.717 "block_size": 512,
00:04:25.717 "num_blocks": 16384,
00:04:25.717 "uuid": "678c01ef-4ca8-537c-92a4-1d0bc65523fd",
00:04:25.717 "assigned_rate_limits": {
00:04:25.717 "rw_ios_per_sec": 0,
00:04:25.717 "rw_mbytes_per_sec": 0,
00:04:25.717 "r_mbytes_per_sec": 0,
00:04:25.717 "w_mbytes_per_sec": 0
00:04:25.717 },
00:04:25.717 "claimed": false,
00:04:25.717 "zoned": false,
00:04:25.717 "supported_io_types": {
00:04:25.717 "read": true,
00:04:25.717 "write": true,
00:04:25.717 "unmap": true,
00:04:25.717 "flush": true,
00:04:25.717 "reset": true,
00:04:25.717 "nvme_admin": false,
00:04:25.717 "nvme_io": false,
00:04:25.717 "nvme_io_md": false,
00:04:25.717 "write_zeroes": true,
00:04:25.717 "zcopy": true,
00:04:25.717 "get_zone_info": false,
00:04:25.717 "zone_management": false,
00:04:25.717 "zone_append": false,
00:04:25.717 "compare": false,
00:04:25.717 "compare_and_write": false,
00:04:25.717 "abort": true,
00:04:25.717 "seek_hole": false,
00:04:25.717 "seek_data": false,
00:04:25.717 "copy": true,
00:04:25.717 "nvme_iov_md": false
00:04:25.717 },
00:04:25.717 "memory_domains": [
00:04:25.717 {
00:04:25.717 "dma_device_id": "system",
00:04:25.717 "dma_device_type": 1
00:04:25.717 },
00:04:25.717 {
00:04:25.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:25.717 "dma_device_type": 2
00:04:25.717 }
00:04:25.717 ],
00:04:25.717 "driver_specific": {
00:04:25.717 "passthru": {
00:04:25.717 "name": "Passthru0",
00:04:25.717 "base_bdev_name": "Malloc0"
00:04:25.717 }
00:04:25.717 }
00:04:25.717 }
00:04:25.717 ]'
00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:25.717 12:39:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:25.717
00:04:25.717 real 0m0.290s
00:04:25.717 user 0m0.189s
00:04:25.717 sys 0m0.038s
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:25.717 12:39:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:25.717 ************************************
00:04:25.717 END TEST rpc_integrity
00:04:25.717 ************************************
00:04:25.717 12:39:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:25.717 12:39:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:25.717 12:39:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:25.717 12:39:05 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:25.718 ************************************
00:04:25.718 START TEST rpc_plugins
00:04:25.718 ************************************
00:04:25.718 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:25.718 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.718 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:25.718 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.718 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:25.718 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:25.718 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.718 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:25.718 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.718 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:25.718 {
00:04:25.718 "name": "Malloc1",
00:04:25.718 "aliases": [
00:04:25.718 "118f3548-007e-4248-8446-d0dd0260cc43"
00:04:25.718 ],
00:04:25.718 "product_name": "Malloc disk",
00:04:25.718 "block_size": 4096,
00:04:25.718 "num_blocks": 256,
00:04:25.718 "uuid": "118f3548-007e-4248-8446-d0dd0260cc43",
00:04:25.718 "assigned_rate_limits": {
00:04:25.718 "rw_ios_per_sec": 0,
00:04:25.718 "rw_mbytes_per_sec": 0,
00:04:25.718 "r_mbytes_per_sec": 0,
00:04:25.718 "w_mbytes_per_sec": 0
00:04:25.718 },
00:04:25.718 "claimed": false,
00:04:25.718 "zoned": false,
00:04:25.718 "supported_io_types": {
00:04:25.718 "read": true,
00:04:25.718 "write": true,
00:04:25.718 "unmap": true,
00:04:25.718 "flush": true,
00:04:25.718 "reset": true,
00:04:25.718 "nvme_admin": false,
00:04:25.718 "nvme_io": false,
00:04:25.718 "nvme_io_md": false,
00:04:25.718 "write_zeroes": true,
00:04:25.718 "zcopy": true,
00:04:25.718 "get_zone_info": false,
00:04:25.718 "zone_management": false,
00:04:25.718 "zone_append": false,
00:04:25.718 "compare": false,
00:04:25.718 "compare_and_write": false,
00:04:25.718 "abort": true,
00:04:25.718 "seek_hole": false,
00:04:25.718 "seek_data": false,
00:04:25.718 "copy": true,
00:04:25.718 "nvme_iov_md": false
00:04:25.718 },
00:04:25.718 "memory_domains": [
00:04:25.718 {
00:04:25.718 "dma_device_id": "system",
00:04:25.718 "dma_device_type": 1
00:04:25.718 },
00:04:25.718 {
00:04:25.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:25.718 "dma_device_type": 2
00:04:25.718 }
00:04:25.718 ],
00:04:25.718 "driver_specific": {}
00:04:25.718 }
00:04:25.718 ]'
00:04:25.718 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:25.979 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:25.979 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:25.979 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.979 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:25.979 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.979 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:25.979 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.979 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:25.979 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.979 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:25.979 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:25.979 12:39:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:25.979
00:04:25.979 real 0m0.147s
00:04:25.979 user 0m0.095s
00:04:25.979 sys 0m0.018s
00:04:25.979 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:25.979 12:39:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:25.979 ************************************
00:04:25.979 END TEST rpc_plugins
00:04:25.979 ************************************
00:04:25.979 12:39:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:25.979 12:39:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:25.979 12:39:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:25.979 12:39:05 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:25.979 ************************************
00:04:25.979 START TEST rpc_trace_cmd_test
00:04:25.979 ************************************
00:04:25.979 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:25.979 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:25.979 12:39:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:25.979 12:39:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:25.979 12:39:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:25.979 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:25.979 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid375710",
00:04:25.979 "tpoint_group_mask": "0x8",
00:04:25.979 "iscsi_conn": {
00:04:25.979 "mask": "0x2",
00:04:25.979 "tpoint_mask": "0x0"
00:04:25.979 },
00:04:25.979 "scsi": {
00:04:25.979 "mask": "0x4",
00:04:25.979 "tpoint_mask": "0x0"
00:04:25.979 },
00:04:25.980 "bdev": {
00:04:25.980 "mask": "0x8",
00:04:25.980 "tpoint_mask": "0xffffffffffffffff"
00:04:25.980 },
00:04:25.980 "nvmf_rdma": {
00:04:25.980 "mask": "0x10",
00:04:25.980 "tpoint_mask": "0x0"
00:04:25.980 },
00:04:25.980 "nvmf_tcp": {
00:04:25.980 "mask": "0x20",
00:04:25.980 "tpoint_mask": "0x0"
"tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "ftl": { 00:04:25.980 "mask": "0x40", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "blobfs": { 00:04:25.980 "mask": "0x80", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "dsa": { 00:04:25.980 "mask": "0x200", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "thread": { 00:04:25.980 "mask": "0x400", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "nvme_pcie": { 00:04:25.980 "mask": "0x800", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "iaa": { 00:04:25.980 "mask": "0x1000", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "nvme_tcp": { 00:04:25.980 "mask": "0x2000", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "bdev_nvme": { 00:04:25.980 "mask": "0x4000", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "sock": { 00:04:25.980 "mask": "0x8000", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "blob": { 00:04:25.980 "mask": "0x10000", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "bdev_raid": { 00:04:25.980 "mask": "0x20000", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 }, 00:04:25.980 "scheduler": { 00:04:25.980 "mask": "0x40000", 00:04:25.980 "tpoint_mask": "0x0" 00:04:25.980 } 00:04:25.980 }' 00:04:25.980 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:25.980 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:25.980 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:26.241 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:26.241 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.241 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.241 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.241 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.241 12:39:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.241 12:39:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:26.241 00:04:26.241 real 0m0.232s 00:04:26.241 user 0m0.189s 00:04:26.241 sys 0m0.036s 00:04:26.241 12:39:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.241 12:39:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.241 ************************************ 00:04:26.241 END TEST rpc_trace_cmd_test 00:04:26.241 ************************************ 00:04:26.241 12:39:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.241 12:39:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.241 12:39:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.241 12:39:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.241 12:39:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.241 12:39:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.241 ************************************ 00:04:26.241 START TEST rpc_daemon_integrity 00:04:26.241 ************************************ 00:04:26.241 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:26.241 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.241 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.241 12:39:06 
00:04:26.241 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:26.241 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:26.241 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:26.241 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:26.502 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:26.502 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:26.502 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:26.502 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:26.502 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:26.502 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:26.502 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:26.503 {
00:04:26.503 "name": "Malloc2",
00:04:26.503 "aliases": [
00:04:26.503 "779f203e-6efb-4e3f-8394-23c52b8e1a69"
00:04:26.503 ],
00:04:26.503 "product_name": "Malloc disk",
00:04:26.503 "block_size": 512,
00:04:26.503 "num_blocks": 16384,
00:04:26.503 "uuid": "779f203e-6efb-4e3f-8394-23c52b8e1a69",
00:04:26.503 "assigned_rate_limits": {
00:04:26.503 "rw_ios_per_sec": 0,
00:04:26.503 "rw_mbytes_per_sec": 0,
00:04:26.503 "r_mbytes_per_sec": 0,
00:04:26.503 "w_mbytes_per_sec": 0
00:04:26.503 },
00:04:26.503 "claimed": false,
00:04:26.503 "zoned": false,
00:04:26.503 "supported_io_types": {
00:04:26.503 "read": true,
00:04:26.503 "write": true,
00:04:26.503 "unmap": true,
00:04:26.503 "flush": true,
00:04:26.503 "reset": true,
00:04:26.503 "nvme_admin": false,
00:04:26.503 "nvme_io": false,
00:04:26.503 "nvme_io_md": false,
00:04:26.503 "write_zeroes": true,
00:04:26.503 "zcopy": true,
00:04:26.503 "get_zone_info": false,
00:04:26.503 "zone_management": false,
00:04:26.503 "zone_append": false,
00:04:26.503 "compare": false,
00:04:26.503 "compare_and_write": false,
00:04:26.503 "abort": true,
00:04:26.503 "seek_hole": false,
00:04:26.503 "seek_data": false,
00:04:26.503 "copy": true,
00:04:26.503 "nvme_iov_md": false
00:04:26.503 },
00:04:26.503 "memory_domains": [
00:04:26.503 {
00:04:26.503 "dma_device_id": "system",
00:04:26.503 "dma_device_type": 1
00:04:26.503 },
00:04:26.503 {
00:04:26.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:26.503 "dma_device_type": 2
00:04:26.503 }
00:04:26.503 ],
00:04:26.503 "driver_specific": {}
00:04:26.503 }
00:04:26.503 ]'
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:26.503 [2024-11-25 12:39:06.238333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:26.503 [2024-11-25 12:39:06.238359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:26.503 [2024-11-25 12:39:06.238372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x196ee90
00:04:26.503 [2024-11-25 12:39:06.238378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:26.503 [2024-11-25 12:39:06.239636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:26.503 [2024-11-25 12:39:06.239655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:26.503 Passthru0
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:26.503 {
00:04:26.503 "name": "Malloc2",
00:04:26.503 "aliases": [
00:04:26.503 "779f203e-6efb-4e3f-8394-23c52b8e1a69"
00:04:26.503 ],
00:04:26.503 "product_name": "Malloc disk",
00:04:26.503 "block_size": 512,
00:04:26.503 "num_blocks": 16384,
00:04:26.503 "uuid": "779f203e-6efb-4e3f-8394-23c52b8e1a69",
00:04:26.503 "assigned_rate_limits": {
00:04:26.503 "rw_ios_per_sec": 0,
00:04:26.503 "rw_mbytes_per_sec": 0,
00:04:26.503 "r_mbytes_per_sec": 0,
00:04:26.503 "w_mbytes_per_sec": 0
00:04:26.503 },
00:04:26.503 "claimed": true,
00:04:26.503 "claim_type": "exclusive_write",
00:04:26.503 "zoned": false,
00:04:26.503 "supported_io_types": {
00:04:26.503 "read": true,
00:04:26.503 "write": true,
00:04:26.503 "unmap": true,
00:04:26.503 "flush": true,
00:04:26.503 "reset": true,
00:04:26.503 "nvme_admin": false,
00:04:26.503 "nvme_io": false,
00:04:26.503 "nvme_io_md": false,
00:04:26.503 "write_zeroes": true,
00:04:26.503 "zcopy": true,
00:04:26.503 "get_zone_info": false,
00:04:26.503 "zone_management": false,
00:04:26.503 "zone_append": false,
00:04:26.503 "compare": false,
00:04:26.503 "compare_and_write": false,
00:04:26.503 "abort": true,
00:04:26.503 "seek_hole": false,
00:04:26.503 "seek_data": false,
00:04:26.503 "copy": true,
00:04:26.503 "nvme_iov_md": false
00:04:26.503 },
00:04:26.503 "memory_domains": [
00:04:26.503 {
00:04:26.503 "dma_device_id": "system",
00:04:26.503 "dma_device_type": 1
00:04:26.503 },
00:04:26.503 {
00:04:26.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:26.503 "dma_device_type": 2
00:04:26.503 }
00:04:26.503 ],
00:04:26.503 "driver_specific": {}
00:04:26.503 },
00:04:26.503 {
00:04:26.503 "name": "Passthru0",
00:04:26.503 "aliases": [
00:04:26.503 "e52ced04-53db-54d7-917d-cb53d0f3e2a1"
00:04:26.503 ],
00:04:26.503 "product_name": "passthru",
00:04:26.503 "block_size": 512,
00:04:26.503 "num_blocks": 16384,
00:04:26.503 "uuid": "e52ced04-53db-54d7-917d-cb53d0f3e2a1",
00:04:26.503 "assigned_rate_limits": {
00:04:26.503 "rw_ios_per_sec": 0,
00:04:26.503 "rw_mbytes_per_sec": 0,
00:04:26.503 "r_mbytes_per_sec": 0,
00:04:26.503 "w_mbytes_per_sec": 0
00:04:26.503 },
00:04:26.503 "claimed": false,
00:04:26.503 "zoned": false,
00:04:26.503 "supported_io_types": {
00:04:26.503 "read": true,
00:04:26.503 "write": true,
00:04:26.503 "unmap": true,
00:04:26.503 "flush": true,
00:04:26.503 "reset": true,
00:04:26.503 "nvme_admin": false, 00:04:26.503 "nvme_io": false, 00:04:26.503 "nvme_io_md": false, 00:04:26.503 "write_zeroes": true, 00:04:26.503 "zcopy": true, 00:04:26.503 "get_zone_info": false, 00:04:26.503 "zone_management": false, 00:04:26.503 "zone_append": false, 00:04:26.503 "compare": false, 00:04:26.503 "compare_and_write": false, 00:04:26.503 "abort": true, 00:04:26.503 "seek_hole": false, 00:04:26.503 "seek_data": false, 00:04:26.503 "copy": true, 00:04:26.503 "nvme_iov_md": false 00:04:26.503 }, 00:04:26.503 "memory_domains": [ 00:04:26.503 { 00:04:26.503 "dma_device_id": "system", 00:04:26.503 "dma_device_type": 1 00:04:26.503 }, 00:04:26.503 { 00:04:26.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.503 "dma_device_type": 2 00:04:26.503 } 00:04:26.503 ], 00:04:26.503 "driver_specific": { 00:04:26.503 "passthru": { 00:04:26.503 "name": "Passthru0", 00:04:26.503 "base_bdev_name": "Malloc2" 00:04:26.503 } 00:04:26.503 } 00:04:26.503 } 00:04:26.503 ]' 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.503 00:04:26.503 real 0m0.300s 00:04:26.503 user 0m0.194s 00:04:26.503 sys 0m0.039s 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.503 12:39:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.503 ************************************ 00:04:26.503 END TEST rpc_daemon_integrity 00:04:26.503 ************************************ 00:04:26.764 12:39:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:26.764 12:39:06 rpc -- rpc/rpc.sh@84 -- # killprocess 375710 00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 375710 ']' 00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@958 -- # kill -0 375710 00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@959 -- # uname 00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375710 
00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375710'
00:04:26.764 killing process with pid 375710
00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@973 -- # kill 375710
00:04:26.764 12:39:06 rpc -- common/autotest_common.sh@978 -- # wait 375710
00:04:27.025
00:04:27.025 real 0m2.554s
00:04:27.025 user 0m3.313s
00:04:27.025 sys 0m0.717s
00:04:27.025 12:39:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:27.025 12:39:06 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:27.025 ************************************
00:04:27.025 END TEST rpc
00:04:27.025 ************************************
00:04:27.025 12:39:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:27.025 12:39:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:27.025 12:39:06 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:27.025 12:39:06 -- common/autotest_common.sh@10 -- # set +x
00:04:27.025 ************************************
00:04:27.025 START TEST skip_rpc
00:04:27.025 ************************************
00:04:27.025 12:39:06 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:27.025 * Looking for test storage...
00:04:27.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:27.025 12:39:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:27.025 12:39:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:27.025 12:39:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:27.287 12:39:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
ver1_l : ver2_l) )) 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.287 12:39:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:27.287 12:39:06 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.287 12:39:06 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.287 --rc genhtml_branch_coverage=1 00:04:27.287 --rc genhtml_function_coverage=1 00:04:27.287 --rc genhtml_legend=1 00:04:27.287 --rc geninfo_all_blocks=1 00:04:27.287 --rc geninfo_unexecuted_blocks=1 00:04:27.287 00:04:27.287 ' 00:04:27.287 12:39:06 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.288 --rc genhtml_branch_coverage=1 00:04:27.288 --rc genhtml_function_coverage=1 00:04:27.288 --rc genhtml_legend=1 00:04:27.288 --rc geninfo_all_blocks=1 00:04:27.288 --rc geninfo_unexecuted_blocks=1 00:04:27.288 00:04:27.288 ' 00:04:27.288 12:39:06 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.288 --rc genhtml_branch_coverage=1 00:04:27.288 --rc genhtml_function_coverage=1 00:04:27.288 --rc genhtml_legend=1 00:04:27.288 --rc geninfo_all_blocks=1 00:04:27.288 --rc geninfo_unexecuted_blocks=1 00:04:27.288 00:04:27.288 ' 00:04:27.288 12:39:06 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.288 --rc genhtml_branch_coverage=1 00:04:27.288 --rc genhtml_function_coverage=1 00:04:27.288 --rc genhtml_legend=1 00:04:27.288 --rc geninfo_all_blocks=1 00:04:27.288 --rc geninfo_unexecuted_blocks=1 00:04:27.288 00:04:27.288 ' 00:04:27.288 12:39:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:27.288 12:39:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.288 12:39:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:27.288 12:39:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.288 12:39:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.288 12:39:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.288 ************************************ 00:04:27.288 START TEST skip_rpc 00:04:27.288 ************************************ 00:04:27.288 12:39:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:27.288 
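The trace above runs lcov's version through scripts/common.sh's cmp_versions before choosing coverage flags. A minimal standalone sketch of that comparison idiom, assuming purely numeric dot-separated components (the real script additionally validates each field with a regex, as the decimal steps in the trace show):

    # Return 0 (true) when version $1 is strictly older than $2, field by field.
    lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first larger field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "old lcov: pass the branch/function coverage flags"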
12:39:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=376248 00:04:27.288 12:39:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.288 12:39:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.288 12:39:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.288 [2024-11-25 12:39:07.073228] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:04:27.288 [2024-11-25 12:39:07.073275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376248 ] 00:04:27.288 [2024-11-25 12:39:07.151762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.288 [2024-11-25 12:39:07.188160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 376248 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 376248 ']' 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 376248 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376248 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376248' 00:04:32.579 killing process with pid 376248 00:04:32.579 12:39:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 376248 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 376248 00:04:32.579 00:04:32.579 real 0m5.287s 00:04:32.579 user 0m5.078s 00:04:32.579 sys 0m0.249s 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.579 12:39:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.579 ************************************ 00:04:32.579 END TEST skip_rpc 00:04:32.579 ************************************ 00:04:32.579 12:39:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:32.579 12:39:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.579 12:39:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.579 12:39:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.579 ************************************ 00:04:32.579 START TEST skip_rpc_with_json 00:04:32.579 ************************************ 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=377867 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 377867 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 377867 ']' 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.579 12:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.579 [2024-11-25 12:39:12.437022] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
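waitforlisten, traced above, blocks until the freshly launched target's RPC socket answers. A hedged sketch of the same polling loop, assuming the default /var/tmp/spdk.sock path and an arbitrary retry budget (the real helper in autotest_common.sh also tracks the target pid):

    rpc_sock=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        # spdk_get_version is a cheap RPC that any live target answers
        scripts/rpc.py -s "$rpc_sock" spdk_get_version &>/dev/null && break
        sleep 0.1
    done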
00:04:32.579 [2024-11-25 12:39:12.437073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377867 ] 00:04:32.840 [2024-11-25 12:39:12.515794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.840 [2024-11-25 12:39:12.551457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.412 [2024-11-25 12:39:13.222821] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:33.412 request: 00:04:33.412 { 00:04:33.412 "trtype": "tcp", 00:04:33.412 "method": "nvmf_get_transports", 00:04:33.412 "req_id": 1 00:04:33.412 } 00:04:33.412 Got JSON-RPC error response 00:04:33.412 response: 00:04:33.412 { 00:04:33.412 "code": -19, 00:04:33.412 "message": "No such device" 00:04:33.412 } 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.412 [2024-11-25 12:39:13.234953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.412 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.674 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.674 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.674 { 00:04:33.674 "subsystems": [ 00:04:33.674 { 00:04:33.674 "subsystem": "fsdev", 00:04:33.674 "config": [ 00:04:33.674 { 00:04:33.674 "method": "fsdev_set_opts", 00:04:33.674 "params": { 00:04:33.674 "fsdev_io_pool_size": 65535, 00:04:33.675 "fsdev_io_cache_size": 256 00:04:33.675 } 00:04:33.675 } 00:04:33.675 ] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "vfio_user_target", 00:04:33.675 "config": null 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "keyring", 00:04:33.675 "config": [] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "iobuf", 00:04:33.675 "config": [ 00:04:33.675 { 00:04:33.675 "method": "iobuf_set_options", 00:04:33.675 "params": { 00:04:33.675 "small_pool_count": 8192, 00:04:33.675 "large_pool_count": 1024, 00:04:33.675 "small_bufsize": 8192, 00:04:33.675 "large_bufsize": 135168, 00:04:33.675 "enable_numa": false 00:04:33.675 } 00:04:33.675 } 00:04:33.675 
] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "sock", 00:04:33.675 "config": [ 00:04:33.675 { 00:04:33.675 "method": "sock_set_default_impl", 00:04:33.675 "params": { 00:04:33.675 "impl_name": "posix" 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "sock_impl_set_options", 00:04:33.675 "params": { 00:04:33.675 "impl_name": "ssl", 00:04:33.675 "recv_buf_size": 4096, 00:04:33.675 "send_buf_size": 4096, 00:04:33.675 "enable_recv_pipe": true, 00:04:33.675 "enable_quickack": false, 00:04:33.675 "enable_placement_id": 0, 00:04:33.675 "enable_zerocopy_send_server": true, 00:04:33.675 "enable_zerocopy_send_client": false, 00:04:33.675 "zerocopy_threshold": 0, 00:04:33.675 "tls_version": 0, 00:04:33.675 "enable_ktls": false 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "sock_impl_set_options", 00:04:33.675 "params": { 00:04:33.675 "impl_name": "posix", 00:04:33.675 "recv_buf_size": 2097152, 00:04:33.675 "send_buf_size": 2097152, 00:04:33.675 "enable_recv_pipe": true, 00:04:33.675 "enable_quickack": false, 00:04:33.675 "enable_placement_id": 0, 00:04:33.675 "enable_zerocopy_send_server": true, 00:04:33.675 "enable_zerocopy_send_client": false, 00:04:33.675 "zerocopy_threshold": 0, 00:04:33.675 "tls_version": 0, 00:04:33.675 "enable_ktls": false 00:04:33.675 } 00:04:33.675 } 00:04:33.675 ] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "vmd", 00:04:33.675 "config": [] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "accel", 00:04:33.675 "config": [ 00:04:33.675 { 00:04:33.675 "method": "accel_set_options", 00:04:33.675 "params": { 00:04:33.675 "small_cache_size": 128, 00:04:33.675 "large_cache_size": 16, 00:04:33.675 "task_count": 2048, 00:04:33.675 "sequence_count": 2048, 00:04:33.675 "buf_count": 2048 00:04:33.675 } 00:04:33.675 } 00:04:33.675 ] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "bdev", 00:04:33.675 "config": [ 00:04:33.675 { 00:04:33.675 "method": "bdev_set_options", 00:04:33.675 "params": { 00:04:33.675 "bdev_io_pool_size": 65535, 00:04:33.675 "bdev_io_cache_size": 256, 00:04:33.675 "bdev_auto_examine": true, 00:04:33.675 "iobuf_small_cache_size": 128, 00:04:33.675 "iobuf_large_cache_size": 16 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "bdev_raid_set_options", 00:04:33.675 "params": { 00:04:33.675 "process_window_size_kb": 1024, 00:04:33.675 "process_max_bandwidth_mb_sec": 0 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "bdev_iscsi_set_options", 00:04:33.675 "params": { 00:04:33.675 "timeout_sec": 30 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "bdev_nvme_set_options", 00:04:33.675 "params": { 00:04:33.675 "action_on_timeout": "none", 00:04:33.675 "timeout_us": 0, 00:04:33.675 "timeout_admin_us": 0, 00:04:33.675 "keep_alive_timeout_ms": 10000, 00:04:33.675 "arbitration_burst": 0, 00:04:33.675 "low_priority_weight": 0, 00:04:33.675 "medium_priority_weight": 0, 00:04:33.675 "high_priority_weight": 0, 00:04:33.675 "nvme_adminq_poll_period_us": 10000, 00:04:33.675 "nvme_ioq_poll_period_us": 0, 00:04:33.675 "io_queue_requests": 0, 00:04:33.675 "delay_cmd_submit": true, 00:04:33.675 "transport_retry_count": 4, 00:04:33.675 "bdev_retry_count": 3, 00:04:33.675 "transport_ack_timeout": 0, 00:04:33.675 "ctrlr_loss_timeout_sec": 0, 00:04:33.675 "reconnect_delay_sec": 0, 00:04:33.675 "fast_io_fail_timeout_sec": 0, 00:04:33.675 "disable_auto_failback": false, 00:04:33.675 "generate_uuids": false, 00:04:33.675 "transport_tos": 0, 
00:04:33.675 "nvme_error_stat": false, 00:04:33.675 "rdma_srq_size": 0, 00:04:33.675 "io_path_stat": false, 00:04:33.675 "allow_accel_sequence": false, 00:04:33.675 "rdma_max_cq_size": 0, 00:04:33.675 "rdma_cm_event_timeout_ms": 0, 00:04:33.675 "dhchap_digests": [ 00:04:33.675 "sha256", 00:04:33.675 "sha384", 00:04:33.675 "sha512" 00:04:33.675 ], 00:04:33.675 "dhchap_dhgroups": [ 00:04:33.675 "null", 00:04:33.675 "ffdhe2048", 00:04:33.675 "ffdhe3072", 00:04:33.675 "ffdhe4096", 00:04:33.675 "ffdhe6144", 00:04:33.675 "ffdhe8192" 00:04:33.675 ] 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "bdev_nvme_set_hotplug", 00:04:33.675 "params": { 00:04:33.675 "period_us": 100000, 00:04:33.675 "enable": false 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "bdev_wait_for_examine" 00:04:33.675 } 00:04:33.675 ] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "scsi", 00:04:33.675 "config": null 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "scheduler", 00:04:33.675 "config": [ 00:04:33.675 { 00:04:33.675 "method": "framework_set_scheduler", 00:04:33.675 "params": { 00:04:33.675 "name": "static" 00:04:33.675 } 00:04:33.675 } 00:04:33.675 ] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "vhost_scsi", 00:04:33.675 "config": [] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "vhost_blk", 00:04:33.675 "config": [] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "ublk", 00:04:33.675 "config": [] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "nbd", 00:04:33.675 "config": [] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "nvmf", 00:04:33.675 "config": [ 00:04:33.675 { 00:04:33.675 "method": "nvmf_set_config", 00:04:33.675 "params": { 00:04:33.675 "discovery_filter": "match_any", 00:04:33.675 "admin_cmd_passthru": { 00:04:33.675 "identify_ctrlr": false 00:04:33.675 }, 00:04:33.675 "dhchap_digests": [ 00:04:33.675 "sha256", 00:04:33.675 "sha384", 00:04:33.675 "sha512" 00:04:33.675 ], 00:04:33.675 "dhchap_dhgroups": [ 00:04:33.675 "null", 00:04:33.675 "ffdhe2048", 00:04:33.675 "ffdhe3072", 00:04:33.675 "ffdhe4096", 00:04:33.675 "ffdhe6144", 00:04:33.675 "ffdhe8192" 00:04:33.675 ] 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "nvmf_set_max_subsystems", 00:04:33.675 "params": { 00:04:33.675 "max_subsystems": 1024 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "nvmf_set_crdt", 00:04:33.675 "params": { 00:04:33.675 "crdt1": 0, 00:04:33.675 "crdt2": 0, 00:04:33.675 "crdt3": 0 00:04:33.675 } 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "method": "nvmf_create_transport", 00:04:33.675 "params": { 00:04:33.675 "trtype": "TCP", 00:04:33.675 "max_queue_depth": 128, 00:04:33.675 "max_io_qpairs_per_ctrlr": 127, 00:04:33.675 "in_capsule_data_size": 4096, 00:04:33.675 "max_io_size": 131072, 00:04:33.675 "io_unit_size": 131072, 00:04:33.675 "max_aq_depth": 128, 00:04:33.675 "num_shared_buffers": 511, 00:04:33.675 "buf_cache_size": 4294967295, 00:04:33.675 "dif_insert_or_strip": false, 00:04:33.675 "zcopy": false, 00:04:33.675 "c2h_success": true, 00:04:33.675 "sock_priority": 0, 00:04:33.675 "abort_timeout_sec": 1, 00:04:33.675 "ack_timeout": 0, 00:04:33.675 "data_wr_pool_size": 0 00:04:33.675 } 00:04:33.675 } 00:04:33.675 ] 00:04:33.675 }, 00:04:33.675 { 00:04:33.675 "subsystem": "iscsi", 00:04:33.675 "config": [ 00:04:33.675 { 00:04:33.675 "method": "iscsi_set_options", 00:04:33.675 "params": { 00:04:33.675 "node_base": "iqn.2016-06.io.spdk", 00:04:33.675 "max_sessions": 
128, 00:04:33.675 "max_connections_per_session": 2, 00:04:33.675 "max_queue_depth": 64, 00:04:33.675 "default_time2wait": 2, 00:04:33.675 "default_time2retain": 20, 00:04:33.675 "first_burst_length": 8192, 00:04:33.675 "immediate_data": true, 00:04:33.675 "allow_duplicated_isid": false, 00:04:33.675 "error_recovery_level": 0, 00:04:33.675 "nop_timeout": 60, 00:04:33.675 "nop_in_interval": 30, 00:04:33.675 "disable_chap": false, 00:04:33.675 "require_chap": false, 00:04:33.675 "mutual_chap": false, 00:04:33.675 "chap_group": 0, 00:04:33.675 "max_large_datain_per_connection": 64, 00:04:33.675 "max_r2t_per_connection": 4, 00:04:33.675 "pdu_pool_size": 36864, 00:04:33.676 "immediate_data_pool_size": 16384, 00:04:33.676 "data_out_pool_size": 2048 00:04:33.676 } 00:04:33.676 } 00:04:33.676 ] 00:04:33.676 } 00:04:33.676 ] 00:04:33.676 } 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 377867 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 377867 ']' 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 377867 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 377867 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 377867' 00:04:33.676 killing process with pid 377867 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 377867 00:04:33.676 12:39:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 377867 00:04:33.937 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.937 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=378079 00:04:33.937 12:39:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 378079 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 378079 ']' 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 378079 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378079 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 378079' 00:04:39.226 killing process with pid 378079 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 378079 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 378079 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.226 00:04:39.226 real 0m6.590s 00:04:39.226 user 0m6.537s 00:04:39.226 sys 0m0.525s 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.226 12:39:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.226 ************************************ 00:04:39.226 END TEST skip_rpc_with_json 00:04:39.226 ************************************ 00:04:39.226 12:39:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:39.226 12:39:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.226 12:39:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.226 12:39:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.226 ************************************ 00:04:39.226 START TEST skip_rpc_with_delay 00:04:39.226 ************************************ 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.226 [2024-11-25 
12:39:19.099529] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.226 00:04:39.226 real 0m0.075s 00:04:39.226 user 0m0.043s 00:04:39.226 sys 0m0.031s 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.226 12:39:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:39.226 ************************************ 00:04:39.226 END TEST skip_rpc_with_delay 00:04:39.226 ************************************ 00:04:39.495 12:39:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:39.495 12:39:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:39.495 12:39:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:39.495 12:39:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.495 12:39:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.495 12:39:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.495 ************************************ 00:04:39.495 START TEST exit_on_failed_rpc_init 00:04:39.495 ************************************ 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=379326 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 379326 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 379326 ']' 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.495 12:39:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.495 [2024-11-25 12:39:19.268549] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
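The NOT wrapper exercised above inverts a command's exit status so that an expected failure passes the test. A minimal sketch of the idea; the real bookkeeping in autotest_common.sh (the es=234, es=106, es=1 folding visible elsewhere in this trace) additionally maps signal exits above 128 down to small codes:

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # it failed, which is what the test wanted
    }

    NOT false && echo "failure asserted"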
00:04:39.495 [2024-11-25 12:39:19.268614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379326 ] 00:04:39.495 [2024-11-25 12:39:19.351494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.495 [2024-11-25 12:39:19.393449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.173 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.435 [2024-11-25 12:39:20.095575] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:04:40.435 [2024-11-25 12:39:20.095629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379481 ] 00:04:40.435 [2024-11-25 12:39:20.192252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.435 [2024-11-25 12:39:20.227723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.435 [2024-11-25 12:39:20.227779] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
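The second target launched above is expected to fail initialization because the first still owns /var/tmp/spdk.sock, which is exactly what exit_on_failed_rpc_init wants to observe. When two targets genuinely must coexist, each needs its own -r socket path, as json_config does later in this log; the paths below are illustrative:

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &   # first instance
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &   # second instance, distinct socket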
00:04:40.435 [2024-11-25 12:39:20.227788] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:40.435 [2024-11-25 12:39:20.227795] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 379326 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 379326 ']' 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 379326 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 379326 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 379326' 00:04:40.435 killing process with pid 379326 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 379326 00:04:40.435 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 379326 00:04:40.696 00:04:40.696 real 0m1.338s 00:04:40.696 user 0m1.526s 00:04:40.696 sys 0m0.405s 00:04:40.696 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.696 12:39:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.696 ************************************ 00:04:40.696 END TEST exit_on_failed_rpc_init 00:04:40.696 ************************************ 00:04:40.696 12:39:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.696 00:04:40.696 real 0m13.798s 00:04:40.696 user 0m13.416s 00:04:40.696 sys 0m1.516s 00:04:40.696 12:39:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.696 12:39:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.696 ************************************ 00:04:40.696 END TEST skip_rpc 00:04:40.696 ************************************ 00:04:40.958 12:39:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:40.958 12:39:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.958 12:39:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.958 12:39:20 -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.958 ************************************ 00:04:40.958 START TEST rpc_client 00:04:40.958 ************************************ 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:40.958 * Looking for test storage... 00:04:40.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.958 12:39:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.958 --rc genhtml_branch_coverage=1 00:04:40.958 --rc genhtml_function_coverage=1 00:04:40.958 --rc genhtml_legend=1 00:04:40.958 --rc geninfo_all_blocks=1 00:04:40.958 --rc geninfo_unexecuted_blocks=1 00:04:40.958 00:04:40.958 ' 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.958 --rc genhtml_branch_coverage=1 00:04:40.958 --rc genhtml_function_coverage=1 00:04:40.958 --rc genhtml_legend=1 00:04:40.958 --rc geninfo_all_blocks=1 00:04:40.958 --rc geninfo_unexecuted_blocks=1 00:04:40.958 00:04:40.958 ' 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.958 --rc genhtml_branch_coverage=1 00:04:40.958 --rc genhtml_function_coverage=1 00:04:40.958 --rc genhtml_legend=1 00:04:40.958 --rc geninfo_all_blocks=1 00:04:40.958 --rc geninfo_unexecuted_blocks=1 00:04:40.958 00:04:40.958 ' 00:04:40.958 12:39:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.958 --rc genhtml_branch_coverage=1 00:04:40.958 --rc genhtml_function_coverage=1 00:04:40.958 --rc genhtml_legend=1 00:04:40.958 --rc geninfo_all_blocks=1 00:04:40.958 --rc geninfo_unexecuted_blocks=1 00:04:40.958 00:04:40.958 ' 00:04:40.958 12:39:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:41.220 OK 00:04:41.220 12:39:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.220 00:04:41.220 real 0m0.229s 00:04:41.220 user 0m0.139s 00:04:41.220 sys 0m0.104s 00:04:41.220 12:39:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.220 12:39:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.220 ************************************ 00:04:41.220 END TEST rpc_client 00:04:41.220 ************************************ 00:04:41.220 12:39:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
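run_test, invoked above for json_config.sh, wraps every test in the START/END banners seen throughout this log. A hedged sketch of that wrapper; the real helper also accumulates per-test timing:

    run_test() {
        if [ "$#" -le 1 ]; then        # mirrors the '[' 2 -le 1 ']' argument check in the trace
            echo "usage: run_test <name> <cmd> [args...]" >&2
            return 1
        fi
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }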
00:04:41.220 12:39:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.220 12:39:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.220 12:39:20 -- common/autotest_common.sh@10 -- # set +x 00:04:41.220 ************************************ 00:04:41.220 START TEST json_config 00:04:41.220 ************************************ 00:04:41.220 12:39:20 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:41.220 12:39:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.220 12:39:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.220 12:39:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.220 12:39:21 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.220 12:39:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.220 12:39:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.220 12:39:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.220 12:39:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.220 12:39:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.220 12:39:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.220 12:39:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.220 12:39:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.220 12:39:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.220 12:39:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.220 12:39:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.220 12:39:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:41.220 12:39:21 json_config -- scripts/common.sh@345 -- # : 1 00:04:41.220 12:39:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.220 12:39:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.220 12:39:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:41.483 12:39:21 json_config -- scripts/common.sh@353 -- # local d=1 00:04:41.483 12:39:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.483 12:39:21 json_config -- scripts/common.sh@355 -- # echo 1 00:04:41.483 12:39:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.483 12:39:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:41.483 12:39:21 json_config -- scripts/common.sh@353 -- # local d=2 00:04:41.483 12:39:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.483 12:39:21 json_config -- scripts/common.sh@355 -- # echo 2 00:04:41.483 12:39:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.483 12:39:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.483 12:39:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.483 12:39:21 json_config -- scripts/common.sh@368 -- # return 0 00:04:41.483 12:39:21 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.483 12:39:21 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.483 --rc genhtml_branch_coverage=1 00:04:41.483 --rc genhtml_function_coverage=1 00:04:41.483 --rc genhtml_legend=1 00:04:41.483 --rc geninfo_all_blocks=1 00:04:41.483 --rc geninfo_unexecuted_blocks=1 00:04:41.483 00:04:41.483 ' 00:04:41.483 12:39:21 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.483 --rc genhtml_branch_coverage=1 00:04:41.483 --rc genhtml_function_coverage=1 00:04:41.483 --rc genhtml_legend=1 00:04:41.483 --rc geninfo_all_blocks=1 00:04:41.483 --rc geninfo_unexecuted_blocks=1 00:04:41.483 00:04:41.483 ' 00:04:41.483 12:39:21 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.483 --rc genhtml_branch_coverage=1 00:04:41.483 --rc genhtml_function_coverage=1 00:04:41.483 --rc genhtml_legend=1 00:04:41.483 --rc geninfo_all_blocks=1 00:04:41.483 --rc geninfo_unexecuted_blocks=1 00:04:41.483 00:04:41.483 ' 00:04:41.483 12:39:21 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.483 --rc genhtml_branch_coverage=1 00:04:41.483 --rc genhtml_function_coverage=1 00:04:41.483 --rc genhtml_legend=1 00:04:41.483 --rc geninfo_all_blocks=1 00:04:41.483 --rc geninfo_unexecuted_blocks=1 00:04:41.483 00:04:41.483 ' 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:41.483 12:39:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.483 12:39:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.483 12:39:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.483 12:39:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.483 12:39:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.483 12:39:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.483 12:39:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.483 12:39:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.483 12:39:21 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.483 12:39:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@51 -- # : 0 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
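The host identity above comes from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the hostid seen in the trace is its trailing UUID. A hedged sketch of that derivation (assumes nvme-cli is installed; the suffix-stripping is an illustrative reimplementation, not necessarily common.sh's exact code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep everything after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")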
00:04:41.483 12:39:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.483 12:39:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:41.483 INFO: JSON configuration test init 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:41.483 12:39:21 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:41.483 12:39:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.484 12:39:21 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.484 12:39:21 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:41.484 12:39:21 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:41.484 12:39:21 json_config -- json_config/common.sh@10 -- # shift 00:04:41.484 12:39:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.484 12:39:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.484 12:39:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.484 12:39:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.484 12:39:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.484 12:39:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=379936 00:04:41.484 12:39:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.484 Waiting for target to run... 00:04:41.484 12:39:21 json_config -- json_config/common.sh@25 -- # waitforlisten 379936 /var/tmp/spdk_tgt.sock 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 379936 ']' 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.484 12:39:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.484 12:39:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.484 [2024-11-25 12:39:21.249037] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
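The json_config target above is launched with --wait-for-rpc on its own socket, so it parks after startup until a client releases it. A hedged sketch of driving such a target (both RPCs are standard SPDK methods; the socket path matches the trace):

    sock=/var/tmp/spdk_tgt.sock
    scripts/rpc.py -s "$sock" rpc_get_methods        # only startup-phase RPCs are listed yet
    scripts/rpc.py -s "$sock" framework_start_init   # let the app finish subsystem init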
00:04:41.484 [2024-11-25 12:39:21.249108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379936 ] 00:04:41.744 [2024-11-25 12:39:21.584339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.745 [2024-11-25 12:39:21.617319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.316 12:39:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.316 12:39:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:42.316 12:39:22 json_config -- json_config/common.sh@26 -- # echo '' 00:04:42.316 00:04:42.316 12:39:22 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:42.316 12:39:22 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:42.316 12:39:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.316 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.316 12:39:22 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:42.316 12:39:22 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:42.316 12:39:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.317 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.317 12:39:22 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:42.317 12:39:22 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:42.317 12:39:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:42.888 12:39:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.888 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:42.888 12:39:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:42.888 12:39:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:43.148 12:39:22 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@54 -- # sort 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:43.148 12:39:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.148 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:43.148 12:39:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.148 12:39:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:43.148 12:39:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.148 12:39:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.409 MallocForNvmf0 00:04:43.409 12:39:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.409 12:39:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.409 MallocForNvmf1 00:04:43.409 12:39:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.409 12:39:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.669 [2024-11-25 12:39:23.421727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.669 12:39:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.669 12:39:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.929 12:39:23 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.929 12:39:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.929 12:39:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.929 12:39:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.190 12:39:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.190 12:39:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.451 [2024-11-25 12:39:24.123976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.451 12:39:24 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:44.451 12:39:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.451 12:39:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.451 12:39:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:44.451 12:39:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.451 12:39:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.451 12:39:24 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:44.451 12:39:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.451 12:39:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.712 MallocBdevForConfigChangeCheck 00:04:44.712 12:39:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:44.712 12:39:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.712 12:39:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.712 12:39:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:44.712 12:39:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.972 12:39:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:44.972 INFO: shutting down applications... 
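Note: before the teardown announced here, it is worth collecting what the tgt_rpc calls above actually built. The entire NVMe-oF target configuration amounts to these RPCs against the target's socket (sizes, NQN and listener exactly as traced):

  RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # backing namespace 0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # backing namespace 1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420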
00:04:44.972 12:39:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:44.972 12:39:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:44.972 12:39:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:44.972 12:39:24 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:45.542 Calling clear_iscsi_subsystem 00:04:45.542 Calling clear_nvmf_subsystem 00:04:45.542 Calling clear_nbd_subsystem 00:04:45.542 Calling clear_ublk_subsystem 00:04:45.542 Calling clear_vhost_blk_subsystem 00:04:45.542 Calling clear_vhost_scsi_subsystem 00:04:45.542 Calling clear_bdev_subsystem 00:04:45.542 12:39:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:45.542 12:39:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:45.542 12:39:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:45.542 12:39:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.542 12:39:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:45.542 12:39:25 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:45.803 12:39:25 json_config -- json_config/json_config.sh@352 -- # break 00:04:45.803 12:39:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:45.803 12:39:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:45.803 12:39:25 json_config -- json_config/common.sh@31 -- # local app=target 00:04:45.803 12:39:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.803 12:39:25 json_config -- json_config/common.sh@35 -- # [[ -n 379936 ]] 00:04:45.803 12:39:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 379936 00:04:45.803 12:39:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.803 12:39:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.803 12:39:25 json_config -- json_config/common.sh@41 -- # kill -0 379936 00:04:45.803 12:39:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.374 12:39:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.374 12:39:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.374 12:39:26 json_config -- json_config/common.sh@41 -- # kill -0 379936 00:04:46.374 12:39:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.374 12:39:26 json_config -- json_config/common.sh@43 -- # break 00:04:46.374 12:39:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.374 12:39:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.374 SPDK target shutdown done 00:04:46.374 12:39:26 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:46.374 INFO: relaunching applications... 
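Note: the relaunch announced here is the heart of the test: the configuration saved from the first target instance becomes the boot file for the second. The output redirection of save_config is not visible in the trace, but combined with the launch line below it effectively amounts to (paths as in this run):

  # First instance: persist the live configuration
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # Second instance: boot directly from it instead of --wait-for-rpc
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json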
00:04:46.374 12:39:26 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.374 12:39:26 json_config -- json_config/common.sh@9 -- # local app=target 00:04:46.374 12:39:26 json_config -- json_config/common.sh@10 -- # shift 00:04:46.374 12:39:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.374 12:39:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.374 12:39:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.374 12:39:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.374 12:39:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.374 12:39:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=380974 00:04:46.374 12:39:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.374 Waiting for target to run... 00:04:46.374 12:39:26 json_config -- json_config/common.sh@25 -- # waitforlisten 380974 /var/tmp/spdk_tgt.sock 00:04:46.374 12:39:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.374 12:39:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 380974 ']' 00:04:46.374 12:39:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.374 12:39:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.374 12:39:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.374 12:39:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.374 12:39:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.374 [2024-11-25 12:39:26.107322] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:04:46.375 [2024-11-25 12:39:26.107400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380974 ] 00:04:46.634 [2024-11-25 12:39:26.458575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.634 [2024-11-25 12:39:26.488207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.204 [2024-11-25 12:39:27.012119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.204 [2024-11-25 12:39:27.044503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.204 12:39:27 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.204 12:39:27 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:47.204 12:39:27 json_config -- json_config/common.sh@26 -- # echo '' 00:04:47.204 00:04:47.204 12:39:27 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:47.204 12:39:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:47.204 INFO: Checking if target configuration is the same... 
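Note: the comparison that follows feeds a live save_config dump and the boot file through the same normalizer before diffing, so key order cannot cause false mismatches. In outline, json_diff.sh receives the dump via a process substitution (the /dev/fd/62 seen in the trace) and sorts both sides with config_filter.py:

  test/json_config/json_diff.sh \
      <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
      spdk_tgt_config.json
  # inside: config_filter.py -method sort on each input, then diff -u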
00:04:47.204 12:39:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:47.205 12:39:27 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.205 12:39:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.205 + '[' 2 -ne 2 ']' 00:04:47.205 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:47.205 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:47.205 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:47.205 +++ basename /dev/fd/62 00:04:47.205 ++ mktemp /tmp/62.XXX 00:04:47.465 + tmp_file_1=/tmp/62.Kvv 00:04:47.465 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.465 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.465 + tmp_file_2=/tmp/spdk_tgt_config.json.qIM 00:04:47.465 + ret=0 00:04:47.465 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.725 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.726 + diff -u /tmp/62.Kvv /tmp/spdk_tgt_config.json.qIM 00:04:47.726 + echo 'INFO: JSON config files are the same' 00:04:47.726 INFO: JSON config files are the same 00:04:47.726 + rm /tmp/62.Kvv /tmp/spdk_tgt_config.json.qIM 00:04:47.726 + exit 0 00:04:47.726 12:39:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:47.726 12:39:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:47.726 INFO: changing configuration and checking if this can be detected... 00:04:47.726 12:39:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.726 12:39:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.987 12:39:27 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.987 12:39:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:47.987 12:39:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.987 + '[' 2 -ne 2 ']' 00:04:47.987 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:47.987 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:47.987 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:47.987 +++ basename /dev/fd/62 00:04:47.987 ++ mktemp /tmp/62.XXX 00:04:47.987 + tmp_file_1=/tmp/62.4m0 00:04:47.987 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.987 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.987 + tmp_file_2=/tmp/spdk_tgt_config.json.3jP 00:04:47.987 + ret=0 00:04:47.987 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.248 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.248 + diff -u /tmp/62.4m0 /tmp/spdk_tgt_config.json.3jP 00:04:48.248 + ret=1 00:04:48.248 + echo '=== Start of file: /tmp/62.4m0 ===' 00:04:48.248 + cat /tmp/62.4m0 00:04:48.248 + echo '=== End of file: /tmp/62.4m0 ===' 00:04:48.248 + echo '' 00:04:48.248 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3jP ===' 00:04:48.248 + cat /tmp/spdk_tgt_config.json.3jP 00:04:48.248 + echo '=== End of file: /tmp/spdk_tgt_config.json.3jP ===' 00:04:48.248 + echo '' 00:04:48.248 + rm /tmp/62.4m0 /tmp/spdk_tgt_config.json.3jP 00:04:48.248 + exit 1 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:48.248 INFO: configuration change detected. 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 380974 ]] 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.248 12:39:28 json_config -- json_config/json_config.sh@330 -- # killprocess 380974 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@954 -- # '[' -z 380974 ']' 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@958 -- # kill -0 380974 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@959 -- # uname 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.248 12:39:28 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 380974 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 380974' 00:04:48.248 killing process with pid 380974 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@973 -- # kill 380974 00:04:48.248 12:39:28 json_config -- common/autotest_common.sh@978 -- # wait 380974 00:04:48.821 12:39:28 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.821 12:39:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:48.821 12:39:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.821 12:39:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 12:39:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:48.821 12:39:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:48.821 INFO: Success 00:04:48.821 00:04:48.821 real 0m7.498s 00:04:48.821 user 0m8.999s 00:04:48.821 sys 0m2.054s 00:04:48.821 12:39:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.821 12:39:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 ************************************ 00:04:48.821 END TEST json_config 00:04:48.821 ************************************ 00:04:48.821 12:39:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:48.821 12:39:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.821 12:39:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.821 12:39:28 -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 ************************************ 00:04:48.821 START TEST json_config_extra_key 00:04:48.821 ************************************ 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.821 12:39:28 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.821 12:39:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.821 --rc genhtml_branch_coverage=1 00:04:48.821 --rc genhtml_function_coverage=1 00:04:48.821 --rc genhtml_legend=1 00:04:48.821 --rc geninfo_all_blocks=1 00:04:48.821 --rc geninfo_unexecuted_blocks=1 00:04:48.821 00:04:48.821 ' 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.821 --rc genhtml_branch_coverage=1 00:04:48.821 --rc genhtml_function_coverage=1 00:04:48.821 --rc genhtml_legend=1 00:04:48.821 --rc geninfo_all_blocks=1 00:04:48.821 --rc geninfo_unexecuted_blocks=1 00:04:48.821 00:04:48.821 ' 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:48.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.821 --rc genhtml_branch_coverage=1 00:04:48.821 --rc genhtml_function_coverage=1 00:04:48.821 --rc genhtml_legend=1 00:04:48.821 --rc geninfo_all_blocks=1 00:04:48.821 --rc geninfo_unexecuted_blocks=1 00:04:48.821 00:04:48.821 ' 00:04:48.821 12:39:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.821 --rc genhtml_branch_coverage=1 00:04:48.821 --rc genhtml_function_coverage=1 00:04:48.821 --rc genhtml_legend=1 00:04:48.821 --rc geninfo_all_blocks=1 00:04:48.821 --rc geninfo_unexecuted_blocks=1 00:04:48.821 00:04:48.821 ' 00:04:48.821 12:39:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.821 12:39:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:49.082 12:39:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.082 12:39:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.082 12:39:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.082 12:39:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.082 12:39:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.082 12:39:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.082 12:39:28 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.082 12:39:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:49.082 12:39:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.082 12:39:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:49.082 INFO: launching applications... 
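Note: both times test/nvmf/common.sh is sourced in this log (in the json_config run earlier and again just above), its line 33 trips the same complaint: '[' '' -eq 1 ']' applies an integer test to a variable that expands empty, so [ prints "integer expression expected" and the branch is simply skipped. The variable's name is not visible in the trace; a defensive form of such a test (sketch with a placeholder name, not the upstream fix) would be:

  # ${SOME_FLAG:-0} substitutes 0 when the flag is unset or empty,
  # so the -eq comparison always sees an integer
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      : # flag-specific setup would go here
  fi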
00:04:49.082 12:39:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=381548 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.082 Waiting for target to run... 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 381548 /var/tmp/spdk_tgt.sock 00:04:49.082 12:39:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 381548 ']' 00:04:49.082 12:39:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:49.082 12:39:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.082 12:39:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.082 12:39:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.082 12:39:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.082 12:39:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.082 [2024-11-25 12:39:28.802958] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:04:49.082 [2024-11-25 12:39:28.803029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381548 ] 00:04:49.343 [2024-11-25 12:39:29.091514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.343 [2024-11-25 12:39:29.121586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.913 12:39:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.913 12:39:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:49.913 00:04:49.913 12:39:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:49.913 INFO: shutting down applications... 
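Note: the shutdown announced here follows the harness's standard teardown, also traced in full for the json_config run above: SIGINT the target, then poll with kill -0 until the pid disappears. Condensed from the trace (up to 30 iterations, 0.5 s apart):

  kill -SIGINT "${app_pid[$app]}"
  for ((i = 0; i < 30; i++)); do
      # kill -0 delivers no signal; it only tests that the process still exists
      kill -0 "${app_pid[$app]}" 2> /dev/null || break
      sleep 0.5
  done
  echo 'SPDK target shutdown done'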
00:04:49.913 12:39:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 381548 ]] 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 381548 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 381548 00:04:49.913 12:39:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.485 12:39:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.485 12:39:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.485 12:39:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 381548 00:04:50.485 12:39:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.485 12:39:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:50.485 12:39:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.485 12:39:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.485 SPDK target shutdown done 00:04:50.485 12:39:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:50.485 Success 00:04:50.485 00:04:50.485 real 0m1.567s 00:04:50.485 user 0m1.193s 00:04:50.485 sys 0m0.412s 00:04:50.485 12:39:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.485 12:39:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.485 ************************************ 00:04:50.485 END TEST json_config_extra_key 00:04:50.485 ************************************ 00:04:50.485 12:39:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.485 12:39:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.485 12:39:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.485 12:39:30 -- common/autotest_common.sh@10 -- # set +x 00:04:50.485 ************************************ 00:04:50.485 START TEST alias_rpc 00:04:50.485 ************************************ 00:04:50.485 12:39:30 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.485 * Looking for test storage... 
00:04:50.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:50.485 12:39:30 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.485 12:39:30 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.485 12:39:30 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.485 12:39:30 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.485 12:39:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.486 12:39:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.486 --rc genhtml_branch_coverage=1 00:04:50.486 --rc genhtml_function_coverage=1 00:04:50.486 --rc genhtml_legend=1 00:04:50.486 --rc geninfo_all_blocks=1 00:04:50.486 --rc geninfo_unexecuted_blocks=1 00:04:50.486 00:04:50.486 ' 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.486 --rc genhtml_branch_coverage=1 00:04:50.486 --rc genhtml_function_coverage=1 00:04:50.486 --rc genhtml_legend=1 00:04:50.486 --rc geninfo_all_blocks=1 00:04:50.486 --rc geninfo_unexecuted_blocks=1 00:04:50.486 00:04:50.486 ' 00:04:50.486 12:39:30 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.486 --rc genhtml_branch_coverage=1 00:04:50.486 --rc genhtml_function_coverage=1 00:04:50.486 --rc genhtml_legend=1 00:04:50.486 --rc geninfo_all_blocks=1 00:04:50.486 --rc geninfo_unexecuted_blocks=1 00:04:50.486 00:04:50.486 ' 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.486 --rc genhtml_branch_coverage=1 00:04:50.486 --rc genhtml_function_coverage=1 00:04:50.486 --rc genhtml_legend=1 00:04:50.486 --rc geninfo_all_blocks=1 00:04:50.486 --rc geninfo_unexecuted_blocks=1 00:04:50.486 00:04:50.486 ' 00:04:50.486 12:39:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:50.486 12:39:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=381940 00:04:50.486 12:39:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 381940 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 381940 ']' 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.486 12:39:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.486 12:39:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.747 [2024-11-25 12:39:30.424245] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
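Note: the lcov probe that opens each of these sub-tests (the "lt 1.15 2" trace just above) leans on scripts/common.sh splitting dotted versions and comparing them component-wise. The same idea in a self-contained form (simplified sketch; the real cmp_versions also splits on '-' and ':' and supports more operators):

  # version_lt A B: succeed when dotted version A sorts strictly before B
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1 # versions are equal
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'  # matches the traced result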
00:04:50.747 [2024-11-25 12:39:30.424323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381940 ] 00:04:50.747 [2024-11-25 12:39:30.506889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.747 [2024-11-25 12:39:30.550581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.318 12:39:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.318 12:39:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:51.318 12:39:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:51.579 12:39:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 381940 00:04:51.579 12:39:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 381940 ']' 00:04:51.579 12:39:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 381940 00:04:51.579 12:39:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.580 12:39:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.580 12:39:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 381940 00:04:51.580 12:39:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.580 12:39:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.580 12:39:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 381940' 00:04:51.580 killing process with pid 381940 00:04:51.580 12:39:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 381940 00:04:51.580 12:39:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 381940 00:04:51.840 00:04:51.840 real 0m1.469s 00:04:51.840 user 0m1.595s 00:04:51.840 sys 0m0.396s 00:04:51.840 12:39:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.840 12:39:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 ************************************ 00:04:51.840 END TEST alias_rpc 00:04:51.840 ************************************ 00:04:51.840 12:39:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:51.840 12:39:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:51.840 12:39:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.840 12:39:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.840 12:39:31 -- common/autotest_common.sh@10 -- # set +x 00:04:51.840 ************************************ 00:04:51.840 START TEST spdkcli_tcp 00:04:51.840 ************************************ 00:04:51.840 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:52.101 * Looking for test storage... 
00:04:52.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:52.101 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.101 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.101 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.101 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:52.101 12:39:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.102 12:39:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.102 --rc genhtml_branch_coverage=1 00:04:52.102 --rc genhtml_function_coverage=1 00:04:52.102 --rc genhtml_legend=1 00:04:52.102 --rc geninfo_all_blocks=1 00:04:52.102 --rc geninfo_unexecuted_blocks=1 00:04:52.102 00:04:52.102 ' 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.102 --rc genhtml_branch_coverage=1 00:04:52.102 --rc genhtml_function_coverage=1 00:04:52.102 --rc genhtml_legend=1 00:04:52.102 --rc geninfo_all_blocks=1 00:04:52.102 --rc 
geninfo_unexecuted_blocks=1 00:04:52.102 00:04:52.102 ' 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.102 --rc genhtml_branch_coverage=1 00:04:52.102 --rc genhtml_function_coverage=1 00:04:52.102 --rc genhtml_legend=1 00:04:52.102 --rc geninfo_all_blocks=1 00:04:52.102 --rc geninfo_unexecuted_blocks=1 00:04:52.102 00:04:52.102 ' 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.102 --rc genhtml_branch_coverage=1 00:04:52.102 --rc genhtml_function_coverage=1 00:04:52.102 --rc genhtml_legend=1 00:04:52.102 --rc geninfo_all_blocks=1 00:04:52.102 --rc geninfo_unexecuted_blocks=1 00:04:52.102 00:04:52.102 ' 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=382340 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 382340 00:04:52.102 12:39:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 382340 ']' 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.102 12:39:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.102 [2024-11-25 12:39:31.983084] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
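Note: spdkcli_tcp drives the same RPC server over TCP instead of the default UNIX socket: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP side, as the records below show. In short (addresses, retry and timeout values from this run):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # -r 100: retry the connection up to 100 times; -t 2: per-call timeout in seconds
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods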
00:04:52.102 [2024-11-25 12:39:31.983160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382340 ] 00:04:52.364 [2024-11-25 12:39:32.065842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.364 [2024-11-25 12:39:32.109076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.364 [2024-11-25 12:39:32.109222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.937 12:39:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.937 12:39:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:52.937 12:39:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=382546 00:04:52.937 12:39:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:52.937 12:39:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:53.198 [ 00:04:53.198 "bdev_malloc_delete", 00:04:53.198 "bdev_malloc_create", 00:04:53.198 "bdev_null_resize", 00:04:53.198 "bdev_null_delete", 00:04:53.198 "bdev_null_create", 00:04:53.198 "bdev_nvme_cuse_unregister", 00:04:53.198 "bdev_nvme_cuse_register", 00:04:53.198 "bdev_opal_new_user", 00:04:53.198 "bdev_opal_set_lock_state", 00:04:53.198 "bdev_opal_delete", 00:04:53.198 "bdev_opal_get_info", 00:04:53.198 "bdev_opal_create", 00:04:53.198 "bdev_nvme_opal_revert", 00:04:53.198 "bdev_nvme_opal_init", 00:04:53.198 "bdev_nvme_send_cmd", 00:04:53.198 "bdev_nvme_set_keys", 00:04:53.198 "bdev_nvme_get_path_iostat", 00:04:53.198 "bdev_nvme_get_mdns_discovery_info", 00:04:53.198 "bdev_nvme_stop_mdns_discovery", 00:04:53.198 "bdev_nvme_start_mdns_discovery", 00:04:53.198 "bdev_nvme_set_multipath_policy", 00:04:53.198 "bdev_nvme_set_preferred_path", 00:04:53.198 "bdev_nvme_get_io_paths", 00:04:53.198 "bdev_nvme_remove_error_injection", 00:04:53.198 "bdev_nvme_add_error_injection", 00:04:53.198 "bdev_nvme_get_discovery_info", 00:04:53.198 "bdev_nvme_stop_discovery", 00:04:53.198 "bdev_nvme_start_discovery", 00:04:53.198 "bdev_nvme_get_controller_health_info", 00:04:53.198 "bdev_nvme_disable_controller", 00:04:53.198 "bdev_nvme_enable_controller", 00:04:53.198 "bdev_nvme_reset_controller", 00:04:53.198 "bdev_nvme_get_transport_statistics", 00:04:53.198 "bdev_nvme_apply_firmware", 00:04:53.198 "bdev_nvme_detach_controller", 00:04:53.198 "bdev_nvme_get_controllers", 00:04:53.199 "bdev_nvme_attach_controller", 00:04:53.199 "bdev_nvme_set_hotplug", 00:04:53.199 "bdev_nvme_set_options", 00:04:53.199 "bdev_passthru_delete", 00:04:53.199 "bdev_passthru_create", 00:04:53.199 "bdev_lvol_set_parent_bdev", 00:04:53.199 "bdev_lvol_set_parent", 00:04:53.199 "bdev_lvol_check_shallow_copy", 00:04:53.199 "bdev_lvol_start_shallow_copy", 00:04:53.199 "bdev_lvol_grow_lvstore", 00:04:53.199 "bdev_lvol_get_lvols", 00:04:53.199 "bdev_lvol_get_lvstores", 00:04:53.199 "bdev_lvol_delete", 00:04:53.199 "bdev_lvol_set_read_only", 00:04:53.199 "bdev_lvol_resize", 00:04:53.199 "bdev_lvol_decouple_parent", 00:04:53.199 "bdev_lvol_inflate", 00:04:53.199 "bdev_lvol_rename", 00:04:53.199 "bdev_lvol_clone_bdev", 00:04:53.199 "bdev_lvol_clone", 00:04:53.199 "bdev_lvol_snapshot", 00:04:53.199 "bdev_lvol_create", 00:04:53.199 "bdev_lvol_delete_lvstore", 00:04:53.199 "bdev_lvol_rename_lvstore", 
00:04:53.199 "bdev_lvol_create_lvstore", 00:04:53.199 "bdev_raid_set_options", 00:04:53.199 "bdev_raid_remove_base_bdev", 00:04:53.199 "bdev_raid_add_base_bdev", 00:04:53.199 "bdev_raid_delete", 00:04:53.199 "bdev_raid_create", 00:04:53.199 "bdev_raid_get_bdevs", 00:04:53.199 "bdev_error_inject_error", 00:04:53.199 "bdev_error_delete", 00:04:53.199 "bdev_error_create", 00:04:53.199 "bdev_split_delete", 00:04:53.199 "bdev_split_create", 00:04:53.199 "bdev_delay_delete", 00:04:53.199 "bdev_delay_create", 00:04:53.199 "bdev_delay_update_latency", 00:04:53.199 "bdev_zone_block_delete", 00:04:53.199 "bdev_zone_block_create", 00:04:53.199 "blobfs_create", 00:04:53.199 "blobfs_detect", 00:04:53.199 "blobfs_set_cache_size", 00:04:53.199 "bdev_aio_delete", 00:04:53.199 "bdev_aio_rescan", 00:04:53.199 "bdev_aio_create", 00:04:53.199 "bdev_ftl_set_property", 00:04:53.199 "bdev_ftl_get_properties", 00:04:53.199 "bdev_ftl_get_stats", 00:04:53.199 "bdev_ftl_unmap", 00:04:53.199 "bdev_ftl_unload", 00:04:53.199 "bdev_ftl_delete", 00:04:53.199 "bdev_ftl_load", 00:04:53.199 "bdev_ftl_create", 00:04:53.199 "bdev_virtio_attach_controller", 00:04:53.199 "bdev_virtio_scsi_get_devices", 00:04:53.199 "bdev_virtio_detach_controller", 00:04:53.199 "bdev_virtio_blk_set_hotplug", 00:04:53.199 "bdev_iscsi_delete", 00:04:53.199 "bdev_iscsi_create", 00:04:53.199 "bdev_iscsi_set_options", 00:04:53.199 "accel_error_inject_error", 00:04:53.199 "ioat_scan_accel_module", 00:04:53.199 "dsa_scan_accel_module", 00:04:53.199 "iaa_scan_accel_module", 00:04:53.199 "vfu_virtio_create_fs_endpoint", 00:04:53.199 "vfu_virtio_create_scsi_endpoint", 00:04:53.199 "vfu_virtio_scsi_remove_target", 00:04:53.199 "vfu_virtio_scsi_add_target", 00:04:53.199 "vfu_virtio_create_blk_endpoint", 00:04:53.199 "vfu_virtio_delete_endpoint", 00:04:53.199 "keyring_file_remove_key", 00:04:53.199 "keyring_file_add_key", 00:04:53.199 "keyring_linux_set_options", 00:04:53.199 "fsdev_aio_delete", 00:04:53.199 "fsdev_aio_create", 00:04:53.199 "iscsi_get_histogram", 00:04:53.199 "iscsi_enable_histogram", 00:04:53.199 "iscsi_set_options", 00:04:53.199 "iscsi_get_auth_groups", 00:04:53.199 "iscsi_auth_group_remove_secret", 00:04:53.199 "iscsi_auth_group_add_secret", 00:04:53.199 "iscsi_delete_auth_group", 00:04:53.199 "iscsi_create_auth_group", 00:04:53.199 "iscsi_set_discovery_auth", 00:04:53.199 "iscsi_get_options", 00:04:53.199 "iscsi_target_node_request_logout", 00:04:53.199 "iscsi_target_node_set_redirect", 00:04:53.199 "iscsi_target_node_set_auth", 00:04:53.199 "iscsi_target_node_add_lun", 00:04:53.199 "iscsi_get_stats", 00:04:53.199 "iscsi_get_connections", 00:04:53.199 "iscsi_portal_group_set_auth", 00:04:53.199 "iscsi_start_portal_group", 00:04:53.199 "iscsi_delete_portal_group", 00:04:53.199 "iscsi_create_portal_group", 00:04:53.199 "iscsi_get_portal_groups", 00:04:53.199 "iscsi_delete_target_node", 00:04:53.199 "iscsi_target_node_remove_pg_ig_maps", 00:04:53.199 "iscsi_target_node_add_pg_ig_maps", 00:04:53.199 "iscsi_create_target_node", 00:04:53.199 "iscsi_get_target_nodes", 00:04:53.199 "iscsi_delete_initiator_group", 00:04:53.199 "iscsi_initiator_group_remove_initiators", 00:04:53.199 "iscsi_initiator_group_add_initiators", 00:04:53.199 "iscsi_create_initiator_group", 00:04:53.199 "iscsi_get_initiator_groups", 00:04:53.199 "nvmf_set_crdt", 00:04:53.199 "nvmf_set_config", 00:04:53.199 "nvmf_set_max_subsystems", 00:04:53.199 "nvmf_stop_mdns_prr", 00:04:53.199 "nvmf_publish_mdns_prr", 00:04:53.199 "nvmf_subsystem_get_listeners", 00:04:53.199 
"nvmf_subsystem_get_qpairs", 00:04:53.199 "nvmf_subsystem_get_controllers", 00:04:53.199 "nvmf_get_stats", 00:04:53.199 "nvmf_get_transports", 00:04:53.199 "nvmf_create_transport", 00:04:53.199 "nvmf_get_targets", 00:04:53.199 "nvmf_delete_target", 00:04:53.199 "nvmf_create_target", 00:04:53.199 "nvmf_subsystem_allow_any_host", 00:04:53.199 "nvmf_subsystem_set_keys", 00:04:53.199 "nvmf_subsystem_remove_host", 00:04:53.199 "nvmf_subsystem_add_host", 00:04:53.199 "nvmf_ns_remove_host", 00:04:53.199 "nvmf_ns_add_host", 00:04:53.199 "nvmf_subsystem_remove_ns", 00:04:53.199 "nvmf_subsystem_set_ns_ana_group", 00:04:53.199 "nvmf_subsystem_add_ns", 00:04:53.199 "nvmf_subsystem_listener_set_ana_state", 00:04:53.199 "nvmf_discovery_get_referrals", 00:04:53.199 "nvmf_discovery_remove_referral", 00:04:53.199 "nvmf_discovery_add_referral", 00:04:53.199 "nvmf_subsystem_remove_listener", 00:04:53.199 "nvmf_subsystem_add_listener", 00:04:53.199 "nvmf_delete_subsystem", 00:04:53.199 "nvmf_create_subsystem", 00:04:53.199 "nvmf_get_subsystems", 00:04:53.199 "env_dpdk_get_mem_stats", 00:04:53.199 "nbd_get_disks", 00:04:53.199 "nbd_stop_disk", 00:04:53.199 "nbd_start_disk", 00:04:53.199 "ublk_recover_disk", 00:04:53.199 "ublk_get_disks", 00:04:53.199 "ublk_stop_disk", 00:04:53.199 "ublk_start_disk", 00:04:53.199 "ublk_destroy_target", 00:04:53.199 "ublk_create_target", 00:04:53.199 "virtio_blk_create_transport", 00:04:53.199 "virtio_blk_get_transports", 00:04:53.199 "vhost_controller_set_coalescing", 00:04:53.199 "vhost_get_controllers", 00:04:53.199 "vhost_delete_controller", 00:04:53.199 "vhost_create_blk_controller", 00:04:53.199 "vhost_scsi_controller_remove_target", 00:04:53.199 "vhost_scsi_controller_add_target", 00:04:53.199 "vhost_start_scsi_controller", 00:04:53.199 "vhost_create_scsi_controller", 00:04:53.199 "thread_set_cpumask", 00:04:53.199 "scheduler_set_options", 00:04:53.199 "framework_get_governor", 00:04:53.199 "framework_get_scheduler", 00:04:53.199 "framework_set_scheduler", 00:04:53.199 "framework_get_reactors", 00:04:53.199 "thread_get_io_channels", 00:04:53.199 "thread_get_pollers", 00:04:53.199 "thread_get_stats", 00:04:53.199 "framework_monitor_context_switch", 00:04:53.199 "spdk_kill_instance", 00:04:53.199 "log_enable_timestamps", 00:04:53.199 "log_get_flags", 00:04:53.199 "log_clear_flag", 00:04:53.199 "log_set_flag", 00:04:53.199 "log_get_level", 00:04:53.199 "log_set_level", 00:04:53.199 "log_get_print_level", 00:04:53.199 "log_set_print_level", 00:04:53.199 "framework_enable_cpumask_locks", 00:04:53.199 "framework_disable_cpumask_locks", 00:04:53.199 "framework_wait_init", 00:04:53.199 "framework_start_init", 00:04:53.199 "scsi_get_devices", 00:04:53.199 "bdev_get_histogram", 00:04:53.199 "bdev_enable_histogram", 00:04:53.199 "bdev_set_qos_limit", 00:04:53.199 "bdev_set_qd_sampling_period", 00:04:53.199 "bdev_get_bdevs", 00:04:53.199 "bdev_reset_iostat", 00:04:53.199 "bdev_get_iostat", 00:04:53.199 "bdev_examine", 00:04:53.199 "bdev_wait_for_examine", 00:04:53.199 "bdev_set_options", 00:04:53.199 "accel_get_stats", 00:04:53.199 "accel_set_options", 00:04:53.199 "accel_set_driver", 00:04:53.199 "accel_crypto_key_destroy", 00:04:53.199 "accel_crypto_keys_get", 00:04:53.199 "accel_crypto_key_create", 00:04:53.199 "accel_assign_opc", 00:04:53.199 "accel_get_module_info", 00:04:53.199 "accel_get_opc_assignments", 00:04:53.199 "vmd_rescan", 00:04:53.199 "vmd_remove_device", 00:04:53.199 "vmd_enable", 00:04:53.199 "sock_get_default_impl", 00:04:53.199 "sock_set_default_impl", 
00:04:53.199 "sock_impl_set_options", 00:04:53.199 "sock_impl_get_options", 00:04:53.199 "iobuf_get_stats", 00:04:53.199 "iobuf_set_options", 00:04:53.199 "keyring_get_keys", 00:04:53.199 "vfu_tgt_set_base_path", 00:04:53.199 "framework_get_pci_devices", 00:04:53.199 "framework_get_config", 00:04:53.199 "framework_get_subsystems", 00:04:53.199 "fsdev_set_opts", 00:04:53.199 "fsdev_get_opts", 00:04:53.199 "trace_get_info", 00:04:53.199 "trace_get_tpoint_group_mask", 00:04:53.199 "trace_disable_tpoint_group", 00:04:53.199 "trace_enable_tpoint_group", 00:04:53.199 "trace_clear_tpoint_mask", 00:04:53.199 "trace_set_tpoint_mask", 00:04:53.199 "notify_get_notifications", 00:04:53.199 "notify_get_types", 00:04:53.199 "spdk_get_version", 00:04:53.199 "rpc_get_methods" 00:04:53.199 ] 00:04:53.199 12:39:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:53.199 12:39:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.199 12:39:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.199 12:39:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:53.199 12:39:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 382340 00:04:53.199 12:39:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 382340 ']' 00:04:53.199 12:39:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 382340 00:04:53.199 12:39:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:53.200 12:39:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.200 12:39:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382340 00:04:53.200 12:39:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.200 12:39:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.200 12:39:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382340' 00:04:53.200 killing process with pid 382340 00:04:53.200 12:39:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 382340 00:04:53.200 12:39:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 382340 00:04:53.460 00:04:53.460 real 0m1.533s 00:04:53.460 user 0m2.785s 00:04:53.460 sys 0m0.454s 00:04:53.460 12:39:33 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.460 12:39:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.460 ************************************ 00:04:53.460 END TEST spdkcli_tcp 00:04:53.460 ************************************ 00:04:53.460 12:39:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.460 12:39:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.460 12:39:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.460 12:39:33 -- common/autotest_common.sh@10 -- # set +x 00:04:53.460 ************************************ 00:04:53.460 START TEST dpdk_mem_utility 00:04:53.460 ************************************ 00:04:53.460 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.722 * Looking for test storage... 
00:04:53.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.722 12:39:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.722 --rc genhtml_branch_coverage=1 00:04:53.722 --rc genhtml_function_coverage=1 00:04:53.722 --rc genhtml_legend=1 00:04:53.722 --rc geninfo_all_blocks=1 00:04:53.722 --rc geninfo_unexecuted_blocks=1 00:04:53.722 00:04:53.722 ' 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.722 --rc 
genhtml_branch_coverage=1 00:04:53.722 --rc genhtml_function_coverage=1 00:04:53.722 --rc genhtml_legend=1 00:04:53.722 --rc geninfo_all_blocks=1 00:04:53.722 --rc geninfo_unexecuted_blocks=1 00:04:53.722 00:04:53.722 ' 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.722 --rc genhtml_branch_coverage=1 00:04:53.722 --rc genhtml_function_coverage=1 00:04:53.722 --rc genhtml_legend=1 00:04:53.722 --rc geninfo_all_blocks=1 00:04:53.722 --rc geninfo_unexecuted_blocks=1 00:04:53.722 00:04:53.722 ' 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.722 --rc genhtml_branch_coverage=1 00:04:53.722 --rc genhtml_function_coverage=1 00:04:53.722 --rc genhtml_legend=1 00:04:53.722 --rc geninfo_all_blocks=1 00:04:53.722 --rc geninfo_unexecuted_blocks=1 00:04:53.722 00:04:53.722 ' 00:04:53.722 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:53.722 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.722 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=382752 00:04:53.722 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 382752 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 382752 ']' 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.722 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.722 [2024-11-25 12:39:33.560194] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:04:53.722 [2024-11-25 12:39:33.560247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382752 ] 00:04:53.983 [2024-11-25 12:39:33.636098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.983 [2024-11-25 12:39:33.672326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.983 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.983 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:53.983 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:53.983 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:53.983 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.983 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.983 { 00:04:53.983 "filename": "/tmp/spdk_mem_dump.txt" 00:04:53.983 } 00:04:53.983 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.983 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:54.248 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:54.248 1 heaps totaling size 810.000000 MiB 00:04:54.248 size: 810.000000 MiB heap id: 0 00:04:54.248 end heaps---------- 00:04:54.248 9 mempools totaling size 595.772034 MiB 00:04:54.248 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:54.248 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:54.248 size: 92.545471 MiB name: bdev_io_382752 00:04:54.248 size: 50.003479 MiB name: msgpool_382752 00:04:54.248 size: 36.509338 MiB name: fsdev_io_382752 00:04:54.248 size: 21.763794 MiB name: PDU_Pool 00:04:54.248 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:54.248 size: 4.133484 MiB name: evtpool_382752 00:04:54.248 size: 0.026123 MiB name: Session_Pool 00:04:54.248 end mempools------- 00:04:54.248 6 memzones totaling size 4.142822 MiB 00:04:54.248 size: 1.000366 MiB name: RG_ring_0_382752 00:04:54.248 size: 1.000366 MiB name: RG_ring_1_382752 00:04:54.248 size: 1.000366 MiB name: RG_ring_4_382752 00:04:54.248 size: 1.000366 MiB name: RG_ring_5_382752 00:04:54.248 size: 0.125366 MiB name: RG_ring_2_382752 00:04:54.248 size: 0.015991 MiB name: RG_ring_3_382752 00:04:54.248 end memzones------- 00:04:54.248 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:54.248 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:54.248 list of free elements. 
size: 10.862488 MiB 00:04:54.248 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:54.248 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:54.248 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:54.248 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:54.248 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:54.248 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:54.248 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:54.248 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:54.248 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:54.248 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:54.248 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:54.248 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:54.248 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:54.248 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:54.248 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:54.248 list of standard malloc elements. size: 199.218628 MiB 00:04:54.248 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:54.248 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:54.248 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:54.248 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:54.248 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:54.248 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:54.248 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:54.248 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:54.248 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:54.248 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:54.248 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:54.248 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:54.248 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:54.248 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:54.248 list of memzone associated elements. size: 599.918884 MiB 00:04:54.248 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:54.248 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:54.248 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:54.248 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:54.248 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:54.248 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_382752_0 00:04:54.248 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:54.248 associated memzone info: size: 48.002930 MiB name: MP_msgpool_382752_0 00:04:54.248 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:54.248 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_382752_0 00:04:54.248 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:54.248 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:54.248 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:54.248 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:54.248 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:54.248 associated memzone info: size: 3.000122 MiB name: MP_evtpool_382752_0 00:04:54.248 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:54.248 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_382752 00:04:54.248 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:54.248 associated memzone info: size: 1.007996 MiB name: MP_evtpool_382752 00:04:54.248 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:54.248 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:54.248 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:54.248 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:54.248 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:54.248 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:54.248 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:54.248 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:54.248 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:54.248 associated memzone info: size: 1.000366 MiB name: RG_ring_0_382752 00:04:54.248 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:54.248 associated memzone info: size: 1.000366 MiB name: RG_ring_1_382752 00:04:54.248 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:54.248 associated memzone info: size: 1.000366 MiB name: RG_ring_4_382752 00:04:54.248 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:54.248 associated memzone info: size: 1.000366 MiB name: RG_ring_5_382752 00:04:54.248 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:54.248 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_382752 00:04:54.248 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:54.248 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_382752 00:04:54.248 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:54.248 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:54.248 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:54.248 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:54.248 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:54.248 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:54.248 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:54.248 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_382752 00:04:54.248 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:54.248 associated memzone info: size: 0.125366 MiB name: RG_ring_2_382752 00:04:54.248 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:54.248 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:54.248 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:54.248 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:54.248 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:54.248 associated memzone info: size: 0.015991 MiB name: RG_ring_3_382752 00:04:54.248 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:54.248 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:54.248 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:54.248 associated memzone info: size: 0.000183 MiB name: MP_msgpool_382752 00:04:54.248 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:54.248 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_382752 00:04:54.248 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:54.248 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_382752 00:04:54.248 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:54.248 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:54.248 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:54.248 12:39:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 382752 00:04:54.248 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 382752 ']' 00:04:54.248 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 382752 00:04:54.248 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:54.248 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.248 12:39:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382752 00:04:54.248 12:39:34 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.248 12:39:34 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.248 12:39:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382752' 00:04:54.248 killing process with pid 382752 00:04:54.248 12:39:34 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 382752 00:04:54.248 12:39:34 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 382752 00:04:54.509 00:04:54.509 real 0m0.915s 00:04:54.509 user 0m0.889s 00:04:54.509 sys 0m0.368s 00:04:54.509 12:39:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.509 12:39:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.509 ************************************ 00:04:54.509 END TEST dpdk_mem_utility 00:04:54.509 ************************************ 00:04:54.509 12:39:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:54.509 12:39:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.509 12:39:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.509 12:39:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.509 ************************************ 00:04:54.509 START TEST event 00:04:54.509 ************************************ 00:04:54.509 12:39:34 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:54.509 * Looking for test storage... 00:04:54.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.770 12:39:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.770 12:39:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.770 12:39:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.770 12:39:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.770 12:39:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.770 12:39:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.770 12:39:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.770 12:39:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.770 12:39:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.770 12:39:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.770 12:39:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.770 12:39:34 event -- scripts/common.sh@344 -- # case "$op" in 00:04:54.770 12:39:34 event -- scripts/common.sh@345 -- # : 1 00:04:54.770 12:39:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.770 12:39:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.770 12:39:34 event -- scripts/common.sh@365 -- # decimal 1 00:04:54.770 12:39:34 event -- scripts/common.sh@353 -- # local d=1 00:04:54.770 12:39:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.770 12:39:34 event -- scripts/common.sh@355 -- # echo 1 00:04:54.770 12:39:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.770 12:39:34 event -- scripts/common.sh@366 -- # decimal 2 00:04:54.770 12:39:34 event -- scripts/common.sh@353 -- # local d=2 00:04:54.770 12:39:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.770 12:39:34 event -- scripts/common.sh@355 -- # echo 2 00:04:54.770 12:39:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.770 12:39:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.770 12:39:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.770 12:39:34 event -- scripts/common.sh@368 -- # return 0 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.770 --rc genhtml_branch_coverage=1 00:04:54.770 --rc genhtml_function_coverage=1 00:04:54.770 --rc genhtml_legend=1 00:04:54.770 --rc geninfo_all_blocks=1 00:04:54.770 --rc geninfo_unexecuted_blocks=1 00:04:54.770 00:04:54.770 ' 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.770 --rc genhtml_branch_coverage=1 00:04:54.770 --rc genhtml_function_coverage=1 00:04:54.770 --rc genhtml_legend=1 00:04:54.770 --rc geninfo_all_blocks=1 00:04:54.770 --rc geninfo_unexecuted_blocks=1 00:04:54.770 00:04:54.770 ' 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.770 --rc genhtml_branch_coverage=1 00:04:54.770 --rc genhtml_function_coverage=1 00:04:54.770 --rc genhtml_legend=1 00:04:54.770 --rc geninfo_all_blocks=1 00:04:54.770 --rc geninfo_unexecuted_blocks=1 00:04:54.770 00:04:54.770 ' 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.770 --rc genhtml_branch_coverage=1 00:04:54.770 --rc genhtml_function_coverage=1 00:04:54.770 --rc genhtml_legend=1 00:04:54.770 --rc geninfo_all_blocks=1 00:04:54.770 --rc geninfo_unexecuted_blocks=1 00:04:54.770 00:04:54.770 ' 00:04:54.770 12:39:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:54.770 12:39:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:54.770 12:39:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:54.770 12:39:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.770 12:39:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.770 ************************************ 00:04:54.770 START TEST event_perf 00:04:54.770 ************************************ 00:04:54.770 12:39:34 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:54.770 Running I/O for 1 seconds...[2024-11-25 12:39:34.585714] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:04:54.770 [2024-11-25 12:39:34.585811] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383064 ] 00:04:55.031 [2024-11-25 12:39:34.674644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.031 [2024-11-25 12:39:34.720985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.031 [2024-11-25 12:39:34.721104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.031 [2024-11-25 12:39:34.721261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.031 [2024-11-25 12:39:34.721261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.973 Running I/O for 1 seconds... 00:04:55.973 lcore 0: 180843 00:04:55.973 lcore 1: 180839 00:04:55.973 lcore 2: 180838 00:04:55.973 lcore 3: 180841 00:04:55.973 done. 00:04:55.973 00:04:55.973 real 0m1.191s 00:04:55.973 user 0m4.105s 00:04:55.973 sys 0m0.085s 00:04:55.973 12:39:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.973 12:39:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.973 ************************************ 00:04:55.973 END TEST event_perf 00:04:55.973 ************************************ 00:04:55.973 12:39:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:55.973 12:39:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:55.973 12:39:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.973 12:39:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.973 ************************************ 00:04:55.973 START TEST event_reactor 00:04:55.973 ************************************ 00:04:55.973 12:39:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:55.973 [2024-11-25 12:39:35.856939] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:04:55.973 [2024-11-25 12:39:35.857034] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383203 ] 00:04:56.234 [2024-11-25 12:39:35.939818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.235 [2024-11-25 12:39:35.974853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.178 test_start 00:04:57.178 oneshot 00:04:57.178 tick 100 00:04:57.178 tick 100 00:04:57.178 tick 250 00:04:57.178 tick 100 00:04:57.178 tick 100 00:04:57.178 tick 250 00:04:57.178 tick 100 00:04:57.178 tick 500 00:04:57.178 tick 100 00:04:57.178 tick 100 00:04:57.178 tick 250 00:04:57.178 tick 100 00:04:57.178 tick 100 00:04:57.178 test_end 00:04:57.178 00:04:57.178 real 0m1.171s 00:04:57.178 user 0m1.091s 00:04:57.178 sys 0m0.076s 00:04:57.178 12:39:37 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.178 12:39:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:57.178 ************************************ 00:04:57.178 END TEST event_reactor 00:04:57.178 ************************************ 00:04:57.178 12:39:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.178 12:39:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:57.178 12:39:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.178 12:39:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.439 ************************************ 00:04:57.439 START TEST event_reactor_perf 00:04:57.439 ************************************ 00:04:57.439 12:39:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.439 [2024-11-25 12:39:37.105041] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:04:57.439 [2024-11-25 12:39:37.105121] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383538 ] 00:04:57.439 [2024-11-25 12:39:37.186588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.439 [2024-11-25 12:39:37.220694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.382 test_start 00:04:58.382 test_end 00:04:58.382 Performance: 369540 events per second 00:04:58.382 00:04:58.382 real 0m1.168s 00:04:58.382 user 0m1.092s 00:04:58.382 sys 0m0.072s 00:04:58.382 12:39:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.382 12:39:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.382 ************************************ 00:04:58.382 END TEST event_reactor_perf 00:04:58.382 ************************************ 00:04:58.644 12:39:38 event -- event/event.sh@49 -- # uname -s 00:04:58.644 12:39:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:58.644 12:39:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:58.644 12:39:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.644 12:39:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.644 12:39:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.644 ************************************ 00:04:58.644 START TEST event_scheduler 00:04:58.644 ************************************ 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:58.644 * Looking for test storage... 
00:04:58.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.644 12:39:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.644 --rc genhtml_branch_coverage=1 00:04:58.644 --rc genhtml_function_coverage=1 00:04:58.644 --rc genhtml_legend=1 00:04:58.644 --rc geninfo_all_blocks=1 00:04:58.644 --rc geninfo_unexecuted_blocks=1 00:04:58.644 00:04:58.644 ' 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.644 --rc genhtml_branch_coverage=1 00:04:58.644 --rc genhtml_function_coverage=1 00:04:58.644 --rc genhtml_legend=1 00:04:58.644 --rc geninfo_all_blocks=1 00:04:58.644 --rc geninfo_unexecuted_blocks=1 00:04:58.644 00:04:58.644 ' 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.644 --rc genhtml_branch_coverage=1 00:04:58.644 --rc genhtml_function_coverage=1 00:04:58.644 --rc genhtml_legend=1 00:04:58.644 --rc geninfo_all_blocks=1 00:04:58.644 --rc geninfo_unexecuted_blocks=1 00:04:58.644 00:04:58.644 ' 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.644 --rc genhtml_branch_coverage=1 00:04:58.644 --rc genhtml_function_coverage=1 00:04:58.644 --rc genhtml_legend=1 00:04:58.644 --rc geninfo_all_blocks=1 00:04:58.644 --rc geninfo_unexecuted_blocks=1 00:04:58.644 00:04:58.644 ' 00:04:58.644 12:39:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.644 12:39:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=383920 00:04:58.644 12:39:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.644 12:39:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.644 12:39:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 383920 
00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 383920 ']' 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.644 12:39:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.905 [2024-11-25 12:39:38.598303] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:04:58.905 [2024-11-25 12:39:38.598370] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383920 ] 00:04:58.905 [2024-11-25 12:39:38.663988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.905 [2024-11-25 12:39:38.694893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.905 [2024-11-25 12:39:38.694950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.905 [2024-11-25 12:39:38.695107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.905 [2024-11-25 12:39:38.695108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:58.905 12:39:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.905 [2024-11-25 12:39:38.735800] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:58.905 [2024-11-25 12:39:38.735814] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:58.905 [2024-11-25 12:39:38.735821] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:58.905 [2024-11-25 12:39:38.735826] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:58.905 [2024-11-25 12:39:38.735830] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.905 12:39:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.905 [2024-11-25 12:39:38.795925] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.905 12:39:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.905 12:39:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.166 ************************************ 00:04:59.166 START TEST scheduler_create_thread 00:04:59.166 ************************************ 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.166 2 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.166 3 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.166 4 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.166 5 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.166 6 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.166 7 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.166 8 00:04:59.166 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.167 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:59.167 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.167 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.167 9 00:04:59.167 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.167 12:39:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:59.167 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.167 12:39:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.739 10 00:04:59.739 12:39:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.739 12:39:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:59.739 12:39:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.739 12:39:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.126 12:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.126 12:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.126 12:39:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.126 12:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.126 12:39:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.700 12:39:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.700 12:39:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:01.700 12:39:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.700 12:39:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.644 12:39:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.644 12:39:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:02.644 12:39:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:02.644 12:39:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.644 12:39:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.216 12:39:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.216 00:05:03.216 real 0m4.226s 00:05:03.216 user 0m0.023s 00:05:03.216 sys 0m0.008s 00:05:03.216 12:39:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.216 12:39:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.216 ************************************ 00:05:03.216 END TEST scheduler_create_thread 00:05:03.216 ************************************ 00:05:03.216 12:39:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.216 12:39:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 383920 00:05:03.216 12:39:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 383920 ']' 00:05:03.216 12:39:43 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 383920 00:05:03.216 12:39:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:03.216 12:39:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.217 12:39:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383920 00:05:03.478 12:39:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:03.478 12:39:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:03.479 12:39:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383920' 00:05:03.479 killing process with pid 383920 00:05:03.479 12:39:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 383920 00:05:03.479 12:39:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 383920 00:05:03.739 [2024-11-25 12:39:43.437393] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
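The scheduler_create_thread test above drives the scheduler entirely through the test app's plugin RPCs: pinned busy and idle threads on each core of the 0xF mask, a few unpinned ones, one retune, one delete. A condensed replay, assuming the scheduler_plugin module is importable by rpc.py (the test arranges that on its side); thread ids 11 and 12 are the values the create calls returned in this particular run:

# Thread RPCs as traced above, slightly reordered for compactness.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"

for mask in 0x1 0x2 0x4 0x8; do
  $RPC scheduler_thread_create -n active_pinned -m "$mask" -a 100  # busy, pinned
  $RPC scheduler_thread_create -n idle_pinned -m "$mask" -a 0      # idle, pinned
done
$RPC scheduler_thread_create -n one_third_active -a 30   # unpinned, 30% active
$RPC scheduler_thread_create -n half_active -a 0         # returned thread_id=11
$RPC scheduler_thread_set_active 11 50                   # raise it to 50% busy
$RPC scheduler_thread_create -n deleted -a 100           # returned thread_id=12
$RPC scheduler_thread_delete 12                          # and remove it again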
00:05:03.739
00:05:03.739 real 0m5.257s
00:05:03.739 user 0m11.142s
00:05:03.739 sys 0m0.408s
00:05:03.739 12:39:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:03.739 12:39:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:03.739 ************************************
00:05:03.739 END TEST event_scheduler
00:05:03.739 ************************************
00:05:03.739 12:39:43 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:04.001 12:39:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:04.001 12:39:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.001 12:39:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.001 12:39:43 event -- common/autotest_common.sh@10 -- # set +x
00:05:04.001 ************************************
00:05:04.001 START TEST app_repeat
00:05:04.001 ************************************
00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=384981
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 384981'
00:05:04.001 Process app_repeat pid: 384981
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:04.001 spdk_app_start Round 0
00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 384981 /var/tmp/spdk-nbd.sock
00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 384981 ']'
00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:04.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:04.001 [2024-11-25 12:39:43.703509] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:05:04.001 [2024-11-25 12:39:43.703559] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384981 ] 00:05:04.001 [2024-11-25 12:39:43.779600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.001 [2024-11-25 12:39:43.815899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.001 [2024-11-25 12:39:43.815918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.001 12:39:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:04.001 12:39:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.263 Malloc0 00:05:04.264 12:39:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.524 Malloc1 00:05:04.524 12:39:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.524 12:39:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.785 /dev/nbd0 00:05:04.785 12:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.785 12:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.785 1+0 records in 00:05:04.785 1+0 records out 00:05:04.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115788 s, 3.5 MB/s 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.785 12:39:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.785 12:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.785 12:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.785 12:39:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.785 /dev/nbd1 00:05:04.785 12:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.046 1+0 records in 00:05:05.046 1+0 records out 00:05:05.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252559 s, 16.2 MB/s 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:05.046 12:39:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.046 12:39:44 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.046 { 00:05:05.046 "nbd_device": "/dev/nbd0", 00:05:05.046 "bdev_name": "Malloc0" 00:05:05.046 }, 00:05:05.046 { 00:05:05.046 "nbd_device": "/dev/nbd1", 00:05:05.046 "bdev_name": "Malloc1" 00:05:05.046 } 00:05:05.046 ]' 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.046 { 00:05:05.046 "nbd_device": "/dev/nbd0", 00:05:05.046 "bdev_name": "Malloc0" 00:05:05.046 }, 00:05:05.046 { 00:05:05.046 "nbd_device": "/dev/nbd1", 00:05:05.046 "bdev_name": "Malloc1" 00:05:05.046 } 00:05:05.046 ]' 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.046 /dev/nbd1' 00:05:05.046 12:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.046 /dev/nbd1' 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.047 12:39:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.308 256+0 records in 00:05:05.308 256+0 records out 00:05:05.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118483 s, 88.5 MB/s 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.308 256+0 records in 00:05:05.308 256+0 records out 00:05:05.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166723 s, 62.9 MB/s 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.308 256+0 records in 00:05:05.308 256+0 records out 00:05:05.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200055 s, 52.4 MB/s 00:05:05.308 12:39:44 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.308 12:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.308 12:39:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.309 12:39:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.569 12:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.829 12:39:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.829 12:39:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.090 12:39:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.090 [2024-11-25 12:39:45.919087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.090 [2024-11-25 12:39:45.955298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.090 [2024-11-25 12:39:45.955300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.090 [2024-11-25 12:39:45.987213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.090 [2024-11-25 12:39:45.987248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.407 12:39:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.407 12:39:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.407 spdk_app_start Round 1 00:05:09.407 12:39:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 384981 /var/tmp/spdk-nbd.sock 00:05:09.407 12:39:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 384981 ']' 00:05:09.407 12:39:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.407 12:39:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.407 12:39:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
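Round 0 above compresses to one write/verify pass: two 64 MB malloc bdevs exported over NBD, 1 MiB of random data pushed through each device with dd, then compared back with cmp. A sketch of that pass; /var/tmp/nbdrandtest is a stand-in scratch path (the run above keeps its file under the workspace's test/event directory):

# One app_repeat verify pass, following the commands in the trace.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=/var/tmp/nbdrandtest

$RPC bdev_malloc_create 64 4096               # -> Malloc0 (64 MB, 4 KiB blocks)
$RPC bdev_malloc_create 64 4096               # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$TMP" bs=4096 count=256           # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct  # write through NBD
  cmp -b -n 1M "$TMP" "$nbd"                             # read back and compare
done
rm "$TMP"

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1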
00:05:09.407 12:39:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.407 12:39:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.407 12:39:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.407 12:39:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.407 12:39:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.407 Malloc0 00:05:09.407 12:39:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.407 Malloc1 00:05:09.674 12:39:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.674 /dev/nbd0 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:09.674 1+0 records in 00:05:09.674 1+0 records out 00:05:09.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197083 s, 20.8 MB/s 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.674 12:39:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.674 12:39:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.935 /dev/nbd1 00:05:09.935 12:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.935 12:39:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.935 1+0 records in 00:05:09.935 1+0 records out 00:05:09.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277788 s, 14.7 MB/s 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.935 12:39:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.935 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.935 12:39:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.935 12:39:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.935 12:39:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.935 12:39:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.196 12:39:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:10.196 { 00:05:10.196 "nbd_device": "/dev/nbd0", 00:05:10.196 "bdev_name": "Malloc0" 00:05:10.196 }, 00:05:10.196 { 00:05:10.196 "nbd_device": "/dev/nbd1", 00:05:10.196 "bdev_name": "Malloc1" 00:05:10.196 } 00:05:10.196 ]' 00:05:10.196 12:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.196 { 00:05:10.196 "nbd_device": "/dev/nbd0", 00:05:10.196 "bdev_name": "Malloc0" 00:05:10.196 }, 00:05:10.196 { 00:05:10.196 "nbd_device": "/dev/nbd1", 00:05:10.196 "bdev_name": "Malloc1" 00:05:10.196 } 00:05:10.196 ]' 00:05:10.196 12:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.196 12:39:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.196 /dev/nbd1' 00:05:10.196 12:39:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.196 /dev/nbd1' 00:05:10.196 12:39:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.196 256+0 records in 00:05:10.196 256+0 records out 00:05:10.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127587 s, 82.2 MB/s 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.196 256+0 records in 00:05:10.196 256+0 records out 00:05:10.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198171 s, 52.9 MB/s 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.196 256+0 records in 00:05:10.196 256+0 records out 00:05:10.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179208 s, 58.5 MB/s 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.196 12:39:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.197 12:39:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.458 12:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.458 12:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.459 12:39:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.459 12:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.459 12:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.459 12:39:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.459 12:39:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.459 12:39:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.459 12:39:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.459 12:39:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.721 12:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.982 12:39:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.982 12:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.982 12:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.982 12:39:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.983 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.983 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.983 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.983 12:39:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.983 12:39:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.983 12:39:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.983 12:39:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.983 12:39:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.983 12:39:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.983 12:39:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.244 [2024-11-25 12:39:50.990759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.244 [2024-11-25 12:39:51.026701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.244 [2024-11-25 12:39:51.026702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.244 [2024-11-25 12:39:51.059403] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.244 [2024-11-25 12:39:51.059435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.548 12:39:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.548 12:39:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:14.548 spdk_app_start Round 2 00:05:14.548 12:39:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 384981 /var/tmp/spdk-nbd.sock 00:05:14.548 12:39:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 384981 ']' 00:05:14.548 12:39:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.548 12:39:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.548 12:39:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
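Around each verify pass the nbd_common.sh helpers re-count the exported devices, and the same check is what confirms the zero-device state after the stop calls. The counting idiom from the trace, in isolation:

# nbd_get_disks returns a JSON array; jq pulls out the device nodes and
# grep -c tallies how many /dev/nbd entries remain (0 once all are stopped).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

nbd_disks_json=$($RPC nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"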
00:05:14.548 12:39:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.548 12:39:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.548 12:39:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.548 12:39:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.548 12:39:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.548 Malloc0 00:05:14.548 12:39:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.548 Malloc1 00:05:14.548 12:39:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.548 12:39:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.810 /dev/nbd0 00:05:14.810 12:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.810 12:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:14.810 1+0 records in 00:05:14.810 1+0 records out 00:05:14.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209557 s, 19.5 MB/s 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.810 12:39:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.810 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.810 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.810 12:39:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.073 /dev/nbd1 00:05:15.073 12:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.073 12:39:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.073 1+0 records in 00:05:15.073 1+0 records out 00:05:15.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370586 s, 11.1 MB/s 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.073 12:39:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.073 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.073 12:39:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.073 12:39:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.073 12:39:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.073 12:39:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:15.334 { 00:05:15.334 "nbd_device": "/dev/nbd0", 00:05:15.334 "bdev_name": "Malloc0" 00:05:15.334 }, 00:05:15.334 { 00:05:15.334 "nbd_device": "/dev/nbd1", 00:05:15.334 "bdev_name": "Malloc1" 00:05:15.334 } 00:05:15.334 ]' 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.334 { 00:05:15.334 "nbd_device": "/dev/nbd0", 00:05:15.334 "bdev_name": "Malloc0" 00:05:15.334 }, 00:05:15.334 { 00:05:15.334 "nbd_device": "/dev/nbd1", 00:05:15.334 "bdev_name": "Malloc1" 00:05:15.334 } 00:05:15.334 ]' 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.334 /dev/nbd1' 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.334 /dev/nbd1' 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.334 12:39:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.335 256+0 records in 00:05:15.335 256+0 records out 00:05:15.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121405 s, 86.4 MB/s 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.335 256+0 records in 00:05:15.335 256+0 records out 00:05:15.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185448 s, 56.5 MB/s 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.335 256+0 records in 00:05:15.335 256+0 records out 00:05:15.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189693 s, 55.3 MB/s 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.335 12:39:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.595 12:39:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.855 12:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.855 12:39:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.856 12:39:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.856 12:39:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.117 12:39:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.377 [2024-11-25 12:39:56.041860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.377 [2024-11-25 12:39:56.078279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.377 [2024-11-25 12:39:56.078282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.377 [2024-11-25 12:39:56.110436] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.377 [2024-11-25 12:39:56.110473] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.681 12:39:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 384981 /var/tmp/spdk-nbd.sock 00:05:19.681 12:39:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 384981 ']' 00:05:19.681 12:39:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.681 12:39:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.681 12:39:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
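The nbd portion of the trace above follows a fixed pattern from the bdev/nbd_common.sh helpers: each device is verified with cmp -b -n 1M against the random source file, detached with the nbd_stop_disk RPC, and then waitfornbd_exit polls /proc/partitions until the kernel entry disappears, while nbd_get_count re-counts devices over the RPC socket. A minimal sketch reconstructed from the xtrace (the retry cap of 20 matches the trace; the sleep between polls is assumed, since xtrace does not echo it):

waitfornbd_exit() {
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # assumed back-off; not visible in the trace
        else
            break       # device gone from /proc/partitions
        fi
    done
    return 0
}

nbd_get_count() {
    # grep -c exits non-zero on a zero count, hence the guard that shows
    # up as '-- # true' in the log above. rpc.py path shortened.
    local rpc_server=$1
    scripts/rpc.py -s "$rpc_server" nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}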
00:05:19.681 12:39:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.681 12:39:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.681 12:39:59 event.app_repeat -- event/event.sh@39 -- # killprocess 384981 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 384981 ']' 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 384981 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 384981 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 384981' 00:05:19.681 killing process with pid 384981 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 384981 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 384981 00:05:19.681 spdk_app_start is called in Round 0. 00:05:19.681 Shutdown signal received, stop current app iteration 00:05:19.681 Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 reinitialization... 00:05:19.681 spdk_app_start is called in Round 1. 00:05:19.681 Shutdown signal received, stop current app iteration 00:05:19.681 Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 reinitialization... 00:05:19.681 spdk_app_start is called in Round 2. 00:05:19.681 Shutdown signal received, stop current app iteration 00:05:19.681 Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 reinitialization... 00:05:19.681 spdk_app_start is called in Round 3. 
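The shutdown sequence above comes from killprocess in test/common/autotest_common.sh: kill -0 confirms the pid is still alive, ps --no-headers -o comm= resolves its command name (reactor_0, SPDK's primary reactor thread), and the process is killed and reaped with wait so app_repeat's next SIGTERM/restart round starts from a clean slate. Reconstructed from the xtrace; only the non-sudo branch is exercised here, so that branch is an inference from the '[' reactor_0 = sudo ']' test:

killprocess() {
    local pid=$1
    kill -0 "$pid"                                      # still running?
    if [[ "$(uname)" == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_0
    fi
    if [[ "$process_name" == sudo ]]; then
        sudo kill "$pid"    # assumed: the trace never takes this branch
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" || true     # reap; a killed target exits non-zero
}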
00:05:19.681 Shutdown signal received, stop current app iteration 00:05:19.681 12:39:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:19.681 12:39:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:19.681 00:05:19.681 real 0m15.574s 00:05:19.681 user 0m33.826s 00:05:19.681 sys 0m2.301s 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.681 12:39:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.681 ************************************ 00:05:19.681 END TEST app_repeat 00:05:19.681 ************************************ 00:05:19.681 12:39:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:19.681 12:39:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:19.681 12:39:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.681 12:39:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.681 12:39:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.681 ************************************ 00:05:19.682 START TEST cpu_locks 00:05:19.682 ************************************ 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:19.682 * Looking for test storage... 00:05:19.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.682 12:39:59 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.682 --rc genhtml_branch_coverage=1 00:05:19.682 --rc genhtml_function_coverage=1 00:05:19.682 --rc genhtml_legend=1 00:05:19.682 --rc geninfo_all_blocks=1 00:05:19.682 --rc geninfo_unexecuted_blocks=1 00:05:19.682 00:05:19.682 ' 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.682 --rc genhtml_branch_coverage=1 00:05:19.682 --rc genhtml_function_coverage=1 00:05:19.682 --rc genhtml_legend=1 00:05:19.682 --rc geninfo_all_blocks=1 00:05:19.682 --rc geninfo_unexecuted_blocks=1 00:05:19.682 00:05:19.682 ' 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.682 --rc genhtml_branch_coverage=1 00:05:19.682 --rc genhtml_function_coverage=1 00:05:19.682 --rc genhtml_legend=1 00:05:19.682 --rc geninfo_all_blocks=1 00:05:19.682 --rc geninfo_unexecuted_blocks=1 00:05:19.682 00:05:19.682 ' 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.682 --rc genhtml_branch_coverage=1 00:05:19.682 --rc genhtml_function_coverage=1 00:05:19.682 --rc genhtml_legend=1 00:05:19.682 --rc geninfo_all_blocks=1 00:05:19.682 --rc geninfo_unexecuted_blocks=1 00:05:19.682 00:05:19.682 ' 00:05:19.682 12:39:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:19.682 12:39:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:19.682 12:39:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:19.682 12:39:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.682 12:39:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.682 ************************************ 
00:05:19.682 START TEST default_locks 00:05:19.682 ************************************ 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=388267 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 388267 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 388267 ']' 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.682 12:39:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.943 [2024-11-25 12:39:59.625813] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:05:19.943 [2024-11-25 12:39:59.625885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388267 ] 00:05:19.943 [2024-11-25 12:39:59.711459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.943 [2024-11-25 12:39:59.753817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.886 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.886 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:20.886 12:40:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 388267 00:05:20.886 12:40:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 388267 00:05:20.886 12:40:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.146 lslocks: write error 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 388267 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 388267 ']' 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 388267 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 388267 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 388267' 
00:05:21.146 killing process with pid 388267 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 388267 00:05:21.146 12:40:00 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 388267 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 388267 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 388267 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 388267 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 388267 ']' 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
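A few entries back, scripts/common.sh probed the installed lcov with 'lt 1.15 2' before assembling LCOV_OPTS: cmp_versions splits both version strings on '.', '-' and ':' (the IFS=.-: in the trace), normalizes each field through decimal, and compares field by field up to the longer of the two. A condensed sketch assuming purely numeric fields (the real helper's decimal() sanitizing is omitted):

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"          # $2 is the operator: < > <= >=
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < max; v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
        ((d1 > d2)) && [[ $2 == '>' || $2 == '>=' ]] && return 0
        ((d1 > d2)) && return 1
        ((d1 < d2)) && [[ $2 == '<' || $2 == '<=' ]] && return 0
        ((d1 < d2)) && return 1
    done
    [[ $2 == *'='* ]]               # all fields equal: only <= / >= pass
}

lt() { cmp_versions "$1" '<' "$2"; }   # 1.15 < 2 here, which is why the
                                       # branch-coverage LCOV_OPTS get set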
00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (388267) - No such process 00:05:21.407 ERROR: process (pid: 388267) is no longer running 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:21.407 00:05:21.407 real 0m1.629s 00:05:21.407 user 0m1.759s 00:05:21.407 sys 0m0.560s 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.407 12:40:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.407 ************************************ 00:05:21.407 END TEST default_locks 00:05:21.407 ************************************ 00:05:21.407 12:40:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:21.407 12:40:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.407 12:40:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.407 12:40:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.407 ************************************ 00:05:21.407 START TEST default_locks_via_rpc 00:05:21.407 ************************************ 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=388639 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 388639 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 388639 ']' 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
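default_locks above is a two-sided assertion. While the target runs, locks_exist proves the core lock is held: lslocks -p <pid> piped into grep -q spdk_cpu_lock (the stray 'lslocks: write error' in the log is harmless noise; grep -q exits on the first match, so lslocks hits EPIPE on its next write). After killprocess, the NOT wrapper expects waitforlisten on the dead pid to fail, while still treating exit codes above 128 as a crash rather than a pass. A reduced sketch of both helpers as traced:

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # per-core lock files are
}                                             # named /var/tmp/spdk_cpu_lock_NNN

NOT() {                          # reduced; the real helper also decodes
    local es=0                   # signal numbers out of high exit codes
    "$@" || es=$?
    ((es > 128)) && return "$es" # killed by a signal: a genuine failure
    ((es == 0)) && return 1      # wrapped command succeeded: NOT fails
    return 0                     # clean error: exactly what was expected
}

locks_exist "$spdk_tgt_pid"            # passes while the target is alive
killprocess "$spdk_tgt_pid"
NOT waitforlisten "$spdk_tgt_pid"      # passes: 'No such process' above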
00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.407 12:40:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.667 [2024-11-25 12:40:01.327984] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:05:21.667 [2024-11-25 12:40:01.328041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388639 ] 00:05:21.667 [2024-11-25 12:40:01.408712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.667 [2024-11-25 12:40:01.449514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.238 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.238 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:22.238 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:22.238 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.238 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.238 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.238 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:22.238 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 388639 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 388639 00:05:22.239 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 388639 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 388639 ']' 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 388639 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 388639 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.912 12:40:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 388639' 00:05:22.912 killing process with pid 388639 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 388639 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 388639 00:05:22.912 00:05:22.912 real 0m1.496s 00:05:22.912 user 0m1.616s 00:05:22.912 sys 0m0.495s 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.912 12:40:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.912 ************************************ 00:05:22.912 END TEST default_locks_via_rpc 00:05:22.912 ************************************ 00:05:22.912 12:40:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:22.912 12:40:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.912 12:40:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.912 12:40:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.242 ************************************ 00:05:23.242 START TEST non_locking_app_on_locked_coremask 00:05:23.242 ************************************ 00:05:23.242 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:23.242 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=388988 00:05:23.242 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 388988 /var/tmp/spdk.sock 00:05:23.242 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.242 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 388988 ']' 00:05:23.242 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.243 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.243 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.243 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.243 12:40:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.243 [2024-11-25 12:40:02.900810] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
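default_locks_via_rpc, which ends just above, toggles the same locks at runtime instead of at startup: framework_disable_cpumask_locks drops the core lock files (no_locks then sees an empty /var/tmp/spdk_cpu_lock_* glob, the '(( 0 != 0 ))' in the trace), and framework_enable_cpumask_locks re-claims them so locks_exist succeeds again. The equivalent direct calls against the default socket (rpc.py path shortened):

scripts/rpc.py framework_disable_cpumask_locks    # lock files released
scripts/rpc.py framework_enable_cpumask_locks     # core 0 re-claimed
lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock   # visible again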
00:05:23.243 [2024-11-25 12:40:02.900871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388988 ] 00:05:23.243 [2024-11-25 12:40:02.982171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.243 [2024-11-25 12:40:03.021063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=389321 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 389321 /var/tmp/spdk2.sock 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 389321 ']' 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.814 12:40:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.076 [2024-11-25 12:40:03.720715] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:05:24.076 [2024-11-25 12:40:03.720770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389321 ] 00:05:24.076 [2024-11-25 12:40:03.842352] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.076 [2024-11-25 12:40:03.842377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.076 [2024-11-25 12:40:03.914340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.647 12:40:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.647 12:40:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.647 12:40:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 388988 00:05:24.647 12:40:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 388988 00:05:24.647 12:40:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.219 lslocks: write error 00:05:25.219 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 388988 00:05:25.219 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 388988 ']' 00:05:25.219 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 388988 00:05:25.219 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.219 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.219 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 388988 00:05:25.481 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.481 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.481 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 388988' 00:05:25.481 killing process with pid 388988 00:05:25.481 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 388988 00:05:25.481 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 388988 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 389321 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 389321 ']' 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 389321 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 389321 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 389321' 00:05:25.743 killing 
process with pid 389321 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 389321 00:05:25.743 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 389321 00:05:26.004 00:05:26.004 real 0m2.990s 00:05:26.004 user 0m3.281s 00:05:26.004 sys 0m0.905s 00:05:26.004 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.004 12:40:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.004 ************************************ 00:05:26.004 END TEST non_locking_app_on_locked_coremask 00:05:26.004 ************************************ 00:05:26.004 12:40:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:26.004 12:40:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.004 12:40:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.004 12:40:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.265 ************************************ 00:05:26.265 START TEST locking_app_on_unlocked_coremask 00:05:26.265 ************************************ 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=389703 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 389703 /var/tmp/spdk.sock 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 389703 ']' 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.265 12:40:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.265 [2024-11-25 12:40:05.976989] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:05:26.265 [2024-11-25 12:40:05.977053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389703 ] 00:05:26.265 [2024-11-25 12:40:06.062765] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
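non_locking_app_on_locked_coremask, which finishes above, shows the escape hatch for co-locating two targets: with pid 388988 holding the core-0 lock, the second spdk_tgt only starts because --disable-cpumask-locks skips the claim entirely ('CPU core locks deactivated.'), and -r gives it its own RPC socket so both stay addressable. The two invocations as traced (paths shortened):

build/bin/spdk_tgt -m 0x1 &                 # claims /var/tmp/spdk_cpu_lock_000
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
    -r /var/tmp/spdk2.sock &                # shares core 0, claims nothing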
00:05:26.265 [2024-11-25 12:40:06.062807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.265 [2024-11-25 12:40:06.104593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=389915 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 389915 /var/tmp/spdk2.sock 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 389915 ']' 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.206 12:40:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.206 [2024-11-25 12:40:06.799786] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:05:27.206 [2024-11-25 12:40:06.799830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389915 ] 00:05:27.206 [2024-11-25 12:40:06.914259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.206 [2024-11-25 12:40:06.986604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.778 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.778 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.778 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 389915 00:05:27.778 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 389915 00:05:27.778 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.045 lslocks: write error 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 389703 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 389703 ']' 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 389703 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 389703 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 389703' 00:05:28.045 killing process with pid 389703 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 389703 00:05:28.045 12:40:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 389703 00:05:28.617 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 389915 00:05:28.617 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 389915 ']' 00:05:28.617 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 389915 00:05:28.617 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.618 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.618 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 389915 00:05:28.618 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.618 12:40:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.618 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 389915' 00:05:28.618 killing process with pid 389915 00:05:28.618 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 389915 00:05:28.618 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 389915 00:05:28.879 00:05:28.879 real 0m2.703s 00:05:28.879 user 0m3.002s 00:05:28.879 sys 0m0.781s 00:05:28.879 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.879 12:40:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.879 ************************************ 00:05:28.879 END TEST locking_app_on_unlocked_coremask 00:05:28.879 ************************************ 00:05:28.879 12:40:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:28.879 12:40:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.879 12:40:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.879 12:40:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.879 ************************************ 00:05:28.879 START TEST locking_app_on_locked_coremask 00:05:28.879 ************************************ 00:05:28.879 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=390401 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 390401 /var/tmp/spdk.sock 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 390401 ']' 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.880 12:40:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.880 [2024-11-25 12:40:08.745735] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
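locking_app_on_unlocked_coremask, completed above, is the mirror case: the first target starts with --disable-cpumask-locks and leaves core 0 unclaimed, so the second, plain target can take the lock even though the core is already busy; locks_exist 389915 then attributes the lock file to the second pid. Sketch of the sequence (paths shortened):

build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # core 0 left unclaimed
first_pid=$!
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # this one takes the lock
second_pid=$!
lslocks -p "$second_pid" | grep -q spdk_cpu_lock      # held by the 2nd target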
00:05:28.880 [2024-11-25 12:40:08.745788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390401 ] 00:05:29.140 [2024-11-25 12:40:08.824561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.140 [2024-11-25 12:40:08.859974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=390417 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 390417 /var/tmp/spdk2.sock 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 390417 /var/tmp/spdk2.sock 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 390417 /var/tmp/spdk2.sock 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 390417 ']' 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.713 12:40:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.713 [2024-11-25 12:40:09.599557] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
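Every 'Waiting for process to start up and listen on UNIX domain socket ...' line in this log is waitforlisten, whose body is never expanded in the trace; only its locals (rpc_addr, max_retries=100) are visible. A generic stand-in for the idea, not SPDK's actual implementation:

# Illustrative only: poll until the target's RPC socket appears or the
# process dies; the real helper lives in test/common/autotest_common.sh.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do               # max_retries=100 per the locals
        kill -0 "$pid" 2> /dev/null || return 1   # target died while we waited
        [[ -S $rpc_addr ]] && return 0            # socket is up
        sleep 0.1                                 # assumed poll interval
    done
    return 1
}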
00:05:29.713 [2024-11-25 12:40:09.599612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390417 ] 00:05:29.975 [2024-11-25 12:40:09.723938] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 390401 has claimed it. 00:05:29.975 [2024-11-25 12:40:09.723986] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (390417) - No such process 00:05:30.548 ERROR: process (pid: 390417) is no longer running 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 390401 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 390401 00:05:30.548 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.120 lslocks: write error 00:05:31.120 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 390401 00:05:31.120 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 390401 ']' 00:05:31.120 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 390401 00:05:31.120 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:31.120 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.120 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390401 00:05:31.121 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.121 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.121 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390401' 00:05:31.121 killing process with pid 390401 00:05:31.121 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 390401 00:05:31.121 12:40:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 390401 00:05:31.382 00:05:31.382 real 0m2.364s 00:05:31.382 user 0m2.672s 00:05:31.382 sys 0m0.667s 00:05:31.382 12:40:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.382 
12:40:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.382 ************************************ 00:05:31.382 END TEST locking_app_on_locked_coremask 00:05:31.382 ************************************ 00:05:31.382 12:40:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:31.382 12:40:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.382 12:40:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.382 12:40:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.382 ************************************ 00:05:31.382 START TEST locking_overlapped_coremask 00:05:31.382 ************************************ 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=390786 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 390786 /var/tmp/spdk.sock 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 390786 ']' 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.382 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.382 [2024-11-25 12:40:11.193301] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:05:31.382 [2024-11-25 12:40:11.193353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390786 ] 00:05:31.382 [2024-11-25 12:40:11.273569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.643 [2024-11-25 12:40:11.314766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.643 [2024-11-25 12:40:11.314941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.643 [2024-11-25 12:40:11.315149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=391064 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 391064 /var/tmp/spdk2.sock 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 391064 /var/tmp/spdk2.sock 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 391064 /var/tmp/spdk2.sock 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 391064 ']' 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.217 12:40:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.217 [2024-11-25 12:40:12.036286] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
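locking_overlapped_coremask pits the running target's -m 0x7 against a second instance's -m 0x1c. Written out: 0x7 = 0b00111 covers cores 0-2 and 0x1c = 0b11100 covers cores 2-4, so the two masks overlap exactly on core 2, which is where the claim fails just below ('Cannot create lock on core 2, probably process 390786 has claimed it'). The shared bit can be checked directly:

printf 'overlap: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. bit 2 / core 2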
00:05:32.217 [2024-11-25 12:40:12.036339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391064 ] 00:05:32.478 [2024-11-25 12:40:12.133646] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 390786 has claimed it. 00:05:32.478 [2024-11-25 12:40:12.133677] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (391064) - No such process 00:05:33.052 ERROR: process (pid: 391064) is no longer running 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 390786 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 390786 ']' 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 390786 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390786 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390786' 00:05:33.052 killing process with pid 390786 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 390786 00:05:33.052 12:40:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 390786 00:05:33.052 00:05:33.052 real 0m1.803s 00:05:33.052 user 0m5.192s 00:05:33.052 sys 0m0.386s 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.052 12:40:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.052 ************************************ 00:05:33.052 END TEST locking_overlapped_coremask 00:05:33.052 ************************************ 00:05:33.314 12:40:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:33.314 12:40:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.314 12:40:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.314 12:40:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.314 ************************************ 00:05:33.314 START TEST locking_overlapped_coremask_via_rpc 00:05:33.314 ************************************ 00:05:33.314 12:40:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=391156 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 391156 /var/tmp/spdk.sock 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 391156 ']' 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.314 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.314 [2024-11-25 12:40:13.057626] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:05:33.314 [2024-11-25 12:40:13.057676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391156 ] 00:05:33.314 [2024-11-25 12:40:13.136658] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.314 [2024-11-25 12:40:13.136690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.314 [2024-11-25 12:40:13.174478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.314 [2024-11-25 12:40:13.174591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.314 [2024-11-25 12:40:13.174594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=391490 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 391490 /var/tmp/spdk2.sock 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 391490 ']' 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.259 12:40:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.259 [2024-11-25 12:40:13.913699] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:05:34.259 [2024-11-25 12:40:13.913753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391490 ] 00:05:34.259 [2024-11-25 12:40:14.012469] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.259 [2024-11-25 12:40:14.012493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.259 [2024-11-25 12:40:14.071904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.260 [2024-11-25 12:40:14.074987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.260 [2024-11-25 12:40:14.074989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.834 [2024-11-25 12:40:14.719930] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 391156 has claimed it. 
00:05:34.834 request: 00:05:34.834 { 00:05:34.834 "method": "framework_enable_cpumask_locks", 00:05:34.834 "req_id": 1 00:05:34.834 } 00:05:34.834 Got JSON-RPC error response 00:05:34.834 response: 00:05:34.834 { 00:05:34.834 "code": -32603, 00:05:34.834 "message": "Failed to claim CPU core: 2" 00:05:34.834 } 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 391156 /var/tmp/spdk.sock 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 391156 ']' 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.834 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 391490 /var/tmp/spdk2.sock 00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 391490 ']' 00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
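
Both cpu_locks tests above provoke the same conflict deliberately: the first target is started with core mask 0x7 (cores 0-2) and the second with 0x1c (cores 2-4), so the two masks intersect exactly on core 2. In the plain locking_overlapped_coremask test the second spdk_tgt tries to claim its per-core lock files at startup and exits; in the via_rpc variant both targets start with --disable-cpumask-locks and the claim is made later through the framework_enable_cpumask_locks RPC, which is why the failure arrives as the JSON-RPC error -32603 shown above rather than a startup abort. A minimal sketch, not part of the test, of the mask arithmetic behind "Cannot create lock on core 2":

    # 0x7 = 0b00111 covers cores 0-2; 0x1c = 0b11100 covers cores 2-4.
    first=0x7 second=0x1c
    overlap=$(( first & second ))   # 0x4, i.e. only bit 2 is set
    for (( core = 0; overlap >> core; core++ )); do
        (( (overlap >> core) & 1 )) && echo "core $core is claimed by both masks"
    done                            # prints: core 2 is claimed by both masks

After the failed claim, check_remaining_locks confirms that only the first target's lock files survive by comparing the glob /var/tmp/spdk_cpu_lock_* against the brace expansion /var/tmp/spdk_cpu_lock_{000..002}, as the trace above shows.
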
00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.097 12:40:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.360 12:40:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.360 12:40:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.360 12:40:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:35.360 12:40:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:35.360 12:40:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:35.360 12:40:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:35.360 00:05:35.360 real 0m2.094s 00:05:35.360 user 0m0.865s 00:05:35.360 sys 0m0.149s 00:05:35.360 12:40:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.360 12:40:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.360 ************************************ 00:05:35.360 END TEST locking_overlapped_coremask_via_rpc 00:05:35.360 ************************************ 00:05:35.360 12:40:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:35.360 12:40:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 391156 ]] 00:05:35.360 12:40:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 391156 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 391156 ']' 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 391156 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391156 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391156' 00:05:35.360 killing process with pid 391156 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 391156 00:05:35.360 12:40:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 391156 00:05:35.621 12:40:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 391490 ]] 00:05:35.621 12:40:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 391490 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 391490 ']' 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 391490 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391490 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391490' 00:05:35.621 killing process with pid 391490 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 391490 00:05:35.621 12:40:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 391490 00:05:35.883 12:40:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.883 12:40:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:35.883 12:40:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 391156 ]] 00:05:35.883 12:40:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 391156 00:05:35.883 12:40:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 391156 ']' 00:05:35.883 12:40:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 391156 00:05:35.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (391156) - No such process 00:05:35.883 12:40:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 391156 is not found' 00:05:35.883 Process with pid 391156 is not found 00:05:35.883 12:40:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 391490 ]] 00:05:35.883 12:40:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 391490 00:05:35.883 12:40:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 391490 ']' 00:05:35.883 12:40:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 391490 00:05:35.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (391490) - No such process 00:05:35.883 12:40:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 391490 is not found' 00:05:35.883 Process with pid 391490 is not found 00:05:35.883 12:40:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.883 00:05:35.883 real 0m16.341s 00:05:35.883 user 0m28.594s 00:05:35.883 sys 0m4.877s 00:05:35.883 12:40:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.883 12:40:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.883 ************************************ 00:05:35.883 END TEST cpu_locks 00:05:35.883 ************************************ 00:05:35.883 00:05:35.883 real 0m41.388s 00:05:35.883 user 1m20.136s 00:05:35.883 sys 0m8.252s 00:05:35.883 12:40:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.883 12:40:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.883 ************************************ 00:05:35.883 END TEST event 00:05:35.883 ************************************ 00:05:35.883 12:40:15 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:35.883 12:40:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.883 12:40:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.883 12:40:15 -- common/autotest_common.sh@10 -- # set +x 00:05:35.883 ************************************ 00:05:35.883 START TEST thread 00:05:35.883 ************************************ 00:05:35.883 12:40:15 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:36.145 * Looking for test storage... 00:05:36.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.145 12:40:15 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.145 12:40:15 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.145 12:40:15 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.145 12:40:15 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.145 12:40:15 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.145 12:40:15 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.145 12:40:15 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.145 12:40:15 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.145 12:40:15 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.145 12:40:15 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.145 12:40:15 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.145 12:40:15 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:36.145 12:40:15 thread -- scripts/common.sh@345 -- # : 1 00:05:36.145 12:40:15 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.145 12:40:15 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.145 12:40:15 thread -- scripts/common.sh@365 -- # decimal 1 00:05:36.145 12:40:15 thread -- scripts/common.sh@353 -- # local d=1 00:05:36.145 12:40:15 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.145 12:40:15 thread -- scripts/common.sh@355 -- # echo 1 00:05:36.145 12:40:15 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.145 12:40:15 thread -- scripts/common.sh@366 -- # decimal 2 00:05:36.145 12:40:15 thread -- scripts/common.sh@353 -- # local d=2 00:05:36.145 12:40:15 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.145 12:40:15 thread -- scripts/common.sh@355 -- # echo 2 00:05:36.145 12:40:15 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.145 12:40:15 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.145 12:40:15 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.145 12:40:15 thread -- scripts/common.sh@368 -- # return 0 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.145 --rc genhtml_branch_coverage=1 00:05:36.145 --rc genhtml_function_coverage=1 00:05:36.145 --rc genhtml_legend=1 00:05:36.145 --rc geninfo_all_blocks=1 00:05:36.145 --rc geninfo_unexecuted_blocks=1 00:05:36.145 00:05:36.145 ' 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.145 --rc genhtml_branch_coverage=1 00:05:36.145 --rc genhtml_function_coverage=1 00:05:36.145 --rc genhtml_legend=1 00:05:36.145 --rc geninfo_all_blocks=1 00:05:36.145 --rc geninfo_unexecuted_blocks=1 00:05:36.145 00:05:36.145 ' 00:05:36.145 12:40:15 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.145 --rc genhtml_branch_coverage=1 00:05:36.145 --rc genhtml_function_coverage=1 00:05:36.145 --rc genhtml_legend=1 00:05:36.145 --rc geninfo_all_blocks=1 00:05:36.145 --rc geninfo_unexecuted_blocks=1 00:05:36.145 00:05:36.145 ' 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.145 --rc genhtml_branch_coverage=1 00:05:36.145 --rc genhtml_function_coverage=1 00:05:36.145 --rc genhtml_legend=1 00:05:36.145 --rc geninfo_all_blocks=1 00:05:36.145 --rc geninfo_unexecuted_blocks=1 00:05:36.145 00:05:36.145 ' 00:05:36.145 12:40:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.145 12:40:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.145 ************************************ 00:05:36.145 START TEST thread_poller_perf 00:05:36.145 ************************************ 00:05:36.145 12:40:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.145 [2024-11-25 12:40:16.042731] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:05:36.145 [2024-11-25 12:40:16.042827] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391940 ] 00:05:36.406 [2024-11-25 12:40:16.125431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.406 [2024-11-25 12:40:16.161283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.406 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:37.349 [2024-11-25T11:40:17.252Z] ====================================== 00:05:37.349 [2024-11-25T11:40:17.252Z] busy:2406619998 (cyc) 00:05:37.349 [2024-11-25T11:40:17.252Z] total_run_count: 287000 00:05:37.349 [2024-11-25T11:40:17.252Z] tsc_hz: 2400000000 (cyc) 00:05:37.349 [2024-11-25T11:40:17.253Z] ====================================== 00:05:37.350 [2024-11-25T11:40:17.253Z] poller_cost: 8385 (cyc), 3493 (nsec) 00:05:37.350 00:05:37.350 real 0m1.180s 00:05:37.350 user 0m1.099s 00:05:37.350 sys 0m0.077s 00:05:37.350 12:40:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.350 12:40:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.350 ************************************ 00:05:37.350 END TEST thread_poller_perf 00:05:37.350 ************************************ 00:05:37.350 12:40:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.350 12:40:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:37.350 12:40:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.350 12:40:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.611 ************************************ 00:05:37.611 START TEST thread_poller_perf 00:05:37.611 ************************************ 00:05:37.611 12:40:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.611 [2024-11-25 12:40:17.301730] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:05:37.611 [2024-11-25 12:40:17.301833] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392294 ] 00:05:37.611 [2024-11-25 12:40:17.383208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.611 [2024-11-25 12:40:17.419433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.611 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:38.554 [2024-11-25T11:40:18.457Z] ====================================== 00:05:38.554 [2024-11-25T11:40:18.457Z] busy:2402106186 (cyc) 00:05:38.554 [2024-11-25T11:40:18.457Z] total_run_count: 3810000 00:05:38.554 [2024-11-25T11:40:18.457Z] tsc_hz: 2400000000 (cyc) 00:05:38.554 [2024-11-25T11:40:18.457Z] ====================================== 00:05:38.554 [2024-11-25T11:40:18.457Z] poller_cost: 630 (cyc), 262 (nsec) 00:05:38.554 00:05:38.554 real 0m1.172s 00:05:38.554 user 0m1.103s 00:05:38.554 sys 0m0.066s 00:05:38.554 12:40:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.554 12:40:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.554 ************************************ 00:05:38.554 END TEST thread_poller_perf 00:05:38.554 ************************************ 00:05:38.816 12:40:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:38.816 00:05:38.816 real 0m2.705s 00:05:38.816 user 0m2.366s 00:05:38.816 sys 0m0.350s 00:05:38.816 12:40:18 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.816 12:40:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.816 ************************************ 00:05:38.816 END TEST thread 00:05:38.816 ************************************ 00:05:38.816 12:40:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:38.816 12:40:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:38.816 12:40:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.816 12:40:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.816 12:40:18 -- common/autotest_common.sh@10 -- # set +x 00:05:38.816 ************************************ 00:05:38.816 START TEST app_cmdline 00:05:38.816 ************************************ 00:05:38.816 12:40:18 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:38.816 * Looking for test storage... 
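
The two poller_perf runs that finish above differ only in the poller period (-l 1 versus -l 0 microseconds), and poller_cost is the measured busy cycles divided by the number of poller invocations, converted to nanoseconds via the reported TSC frequency. A hedged reconstruction of that arithmetic from the printed numbers (the actual poller_perf source may round differently):

    tsc_hz=2400000000                                          # cycles per second, as reported
    # 1 us period run:
    echo $(( 2406619998 / 287000 ))                            # 8385 cyc per invocation
    echo $(( 2406619998 / 287000 * 1000000000 / tsc_hz ))      # 3493 nsec
    # 0 us period run:
    echo $(( 2402106186 / 3810000 ))                           # 630 cyc per invocation
    echo $(( 2402106186 / 3810000 * 1000000000 / tsc_hz ))     # 262 nsec

Both pairs match the tables above exactly. The 1 us periodic pollers cost roughly 13x more cycles per invocation than the zero-period busy pollers, presumably reflecting timer bookkeeping, which is what the two runs contrast.
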
00:05:38.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:38.816 12:40:18 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.816 12:40:18 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.816 12:40:18 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.078 12:40:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.078 --rc genhtml_branch_coverage=1 00:05:39.078 --rc genhtml_function_coverage=1 00:05:39.078 --rc genhtml_legend=1 00:05:39.078 --rc geninfo_all_blocks=1 00:05:39.078 --rc geninfo_unexecuted_blocks=1 00:05:39.078 00:05:39.078 ' 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.078 --rc genhtml_branch_coverage=1 00:05:39.078 --rc genhtml_function_coverage=1 00:05:39.078 --rc genhtml_legend=1 00:05:39.078 --rc geninfo_all_blocks=1 00:05:39.078 --rc geninfo_unexecuted_blocks=1 
00:05:39.078 00:05:39.078 ' 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.078 --rc genhtml_branch_coverage=1 00:05:39.078 --rc genhtml_function_coverage=1 00:05:39.078 --rc genhtml_legend=1 00:05:39.078 --rc geninfo_all_blocks=1 00:05:39.078 --rc geninfo_unexecuted_blocks=1 00:05:39.078 00:05:39.078 ' 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.078 --rc genhtml_branch_coverage=1 00:05:39.078 --rc genhtml_function_coverage=1 00:05:39.078 --rc genhtml_legend=1 00:05:39.078 --rc geninfo_all_blocks=1 00:05:39.078 --rc geninfo_unexecuted_blocks=1 00:05:39.078 00:05:39.078 ' 00:05:39.078 12:40:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:39.078 12:40:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=392648 00:05:39.078 12:40:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 392648 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 392648 ']' 00:05:39.078 12:40:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.078 12:40:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:39.078 [2024-11-25 12:40:18.829350] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:05:39.078 [2024-11-25 12:40:18.829422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392648 ] 00:05:39.078 [2024-11-25 12:40:18.913980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.078 [2024-11-25 12:40:18.955284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:40.023 { 00:05:40.023 "version": "SPDK v25.01-pre git sha1 e4a86cc92", 00:05:40.023 "fields": { 00:05:40.023 "major": 25, 00:05:40.023 "minor": 1, 00:05:40.023 "patch": 0, 00:05:40.023 "suffix": "-pre", 00:05:40.023 "commit": "e4a86cc92" 00:05:40.023 } 00:05:40.023 } 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:40.023 12:40:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:40.023 12:40:19 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:40.285 request: 00:05:40.285 { 00:05:40.285 "method": "env_dpdk_get_mem_stats", 00:05:40.285 "req_id": 1 00:05:40.285 } 00:05:40.285 Got JSON-RPC error response 00:05:40.285 response: 00:05:40.285 { 00:05:40.285 "code": -32601, 00:05:40.285 "message": "Method not found" 00:05:40.285 } 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:40.285 12:40:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 392648 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 392648 ']' 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 392648 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392648 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392648' 00:05:40.285 killing process with pid 392648 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@973 -- # kill 392648 00:05:40.285 12:40:20 app_cmdline -- common/autotest_common.sh@978 -- # wait 392648 00:05:40.546 00:05:40.546 real 0m1.746s 00:05:40.546 user 0m2.097s 00:05:40.546 sys 0m0.461s 00:05:40.546 12:40:20 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.546 12:40:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.546 ************************************ 00:05:40.546 END TEST app_cmdline 00:05:40.546 ************************************ 00:05:40.546 12:40:20 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:40.546 12:40:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.546 12:40:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.546 12:40:20 -- common/autotest_common.sh@10 -- # set +x 00:05:40.546 ************************************ 00:05:40.546 START TEST version 00:05:40.546 ************************************ 00:05:40.546 12:40:20 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:40.808 * Looking for test storage... 
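
The app_cmdline sequence above exercises the --rpcs-allowed allowlist: the target was started permitting only spdk_get_version and rpc_get_methods, the test confirms rpc_get_methods reports exactly those two methods, and any other call fails with JSON-RPC error -32601 (Method not found) instead of being dispatched. A sketch of reproducing this by hand while such a target is running, assuming the same in-tree rpc.py and its default /var/tmp/spdk.sock socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC spdk_get_version         # allowed: prints the version JSON shown above
    $RPC rpc_get_methods          # allowed: lists the two permitted methods
    $RPC env_dpdk_get_mem_stats   # outside the allowlist: fails with -32601
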
00:05:40.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.808 12:40:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.808 12:40:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.808 12:40:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.808 12:40:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.808 12:40:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.808 12:40:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.808 12:40:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.808 12:40:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.808 12:40:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.808 12:40:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.808 12:40:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.808 12:40:20 version -- scripts/common.sh@344 -- # case "$op" in 00:05:40.808 12:40:20 version -- scripts/common.sh@345 -- # : 1 00:05:40.808 12:40:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.808 12:40:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.808 12:40:20 version -- scripts/common.sh@365 -- # decimal 1 00:05:40.808 12:40:20 version -- scripts/common.sh@353 -- # local d=1 00:05:40.808 12:40:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.808 12:40:20 version -- scripts/common.sh@355 -- # echo 1 00:05:40.808 12:40:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.808 12:40:20 version -- scripts/common.sh@366 -- # decimal 2 00:05:40.808 12:40:20 version -- scripts/common.sh@353 -- # local d=2 00:05:40.808 12:40:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.808 12:40:20 version -- scripts/common.sh@355 -- # echo 2 00:05:40.808 12:40:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.808 12:40:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.808 12:40:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.808 12:40:20 version -- scripts/common.sh@368 -- # return 0 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.808 --rc genhtml_branch_coverage=1 00:05:40.808 --rc genhtml_function_coverage=1 00:05:40.808 --rc genhtml_legend=1 00:05:40.808 --rc geninfo_all_blocks=1 00:05:40.808 --rc geninfo_unexecuted_blocks=1 00:05:40.808 00:05:40.808 ' 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.808 --rc genhtml_branch_coverage=1 00:05:40.808 --rc genhtml_function_coverage=1 00:05:40.808 --rc genhtml_legend=1 00:05:40.808 --rc geninfo_all_blocks=1 00:05:40.808 --rc geninfo_unexecuted_blocks=1 00:05:40.808 00:05:40.808 ' 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.808 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.808 --rc genhtml_branch_coverage=1 00:05:40.808 --rc genhtml_function_coverage=1 00:05:40.808 --rc genhtml_legend=1 00:05:40.808 --rc geninfo_all_blocks=1 00:05:40.808 --rc geninfo_unexecuted_blocks=1 00:05:40.808 00:05:40.808 ' 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.808 --rc genhtml_branch_coverage=1 00:05:40.808 --rc genhtml_function_coverage=1 00:05:40.808 --rc genhtml_legend=1 00:05:40.808 --rc geninfo_all_blocks=1 00:05:40.808 --rc geninfo_unexecuted_blocks=1 00:05:40.808 00:05:40.808 ' 00:05:40.808 12:40:20 version -- app/version.sh@17 -- # get_header_version major 00:05:40.808 12:40:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:40.808 12:40:20 version -- app/version.sh@14 -- # cut -f2 00:05:40.808 12:40:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.808 12:40:20 version -- app/version.sh@17 -- # major=25 00:05:40.808 12:40:20 version -- app/version.sh@18 -- # get_header_version minor 00:05:40.808 12:40:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:40.808 12:40:20 version -- app/version.sh@14 -- # cut -f2 00:05:40.808 12:40:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.808 12:40:20 version -- app/version.sh@18 -- # minor=1 00:05:40.808 12:40:20 version -- app/version.sh@19 -- # get_header_version patch 00:05:40.808 12:40:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:40.808 12:40:20 version -- app/version.sh@14 -- # cut -f2 00:05:40.808 12:40:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.808 12:40:20 version -- app/version.sh@19 -- # patch=0 00:05:40.808 12:40:20 version -- app/version.sh@20 -- # get_header_version suffix 00:05:40.808 12:40:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:40.808 12:40:20 version -- app/version.sh@14 -- # cut -f2 00:05:40.808 12:40:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.808 12:40:20 version -- app/version.sh@20 -- # suffix=-pre 00:05:40.808 12:40:20 version -- app/version.sh@22 -- # version=25.1 00:05:40.808 12:40:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:40.808 12:40:20 version -- app/version.sh@28 -- # version=25.1rc0 00:05:40.808 12:40:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:40.808 12:40:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:40.808 12:40:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:40.808 12:40:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:40.808 00:05:40.808 real 0m0.281s 00:05:40.808 user 0m0.161s 00:05:40.808 sys 0m0.169s 00:05:40.808 12:40:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.808 
12:40:20 version -- common/autotest_common.sh@10 -- # set +x 00:05:40.808 ************************************ 00:05:40.808 END TEST version 00:05:40.808 ************************************ 00:05:40.808 12:40:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:40.808 12:40:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:40.808 12:40:20 -- spdk/autotest.sh@194 -- # uname -s 00:05:40.808 12:40:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:41.071 12:40:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:41.071 12:40:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:41.071 12:40:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:41.071 12:40:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:41.071 12:40:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:41.071 12:40:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.071 12:40:20 -- common/autotest_common.sh@10 -- # set +x 00:05:41.071 12:40:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:41.071 12:40:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:41.071 12:40:20 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:41.071 12:40:20 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:41.071 12:40:20 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:41.071 12:40:20 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:41.071 12:40:20 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:41.071 12:40:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:41.071 12:40:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.071 12:40:20 -- common/autotest_common.sh@10 -- # set +x 00:05:41.071 ************************************ 00:05:41.071 START TEST nvmf_tcp 00:05:41.071 ************************************ 00:05:41.071 12:40:20 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:41.071 * Looking for test storage... 
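
version.sh derives the version string by scraping include/spdk/version.h rather than querying a running target: each component is pulled from its #define line with grep, cut, and tr to strip quotes, and the result (25.1rc0 here) must match what python3 reports from spdk.__version__. A sketch of the same extraction, assuming version.h keeps its fields tab-delimited so that cut -f2 isolates the value:

    H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0   # assumption: how -pre maps to rc0
    echo "$version"                                  # 25.1rc0 for the headers above
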
00:05:41.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:41.071 12:40:20 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.071 12:40:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.071 12:40:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.071 12:40:20 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.071 12:40:20 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.333 12:40:20 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:41.333 12:40:20 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.333 12:40:20 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.333 --rc genhtml_branch_coverage=1 00:05:41.333 --rc genhtml_function_coverage=1 00:05:41.333 --rc genhtml_legend=1 00:05:41.333 --rc geninfo_all_blocks=1 00:05:41.334 --rc geninfo_unexecuted_blocks=1 00:05:41.334 00:05:41.334 ' 00:05:41.334 12:40:20 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.334 --rc genhtml_branch_coverage=1 00:05:41.334 --rc genhtml_function_coverage=1 00:05:41.334 --rc genhtml_legend=1 00:05:41.334 --rc geninfo_all_blocks=1 00:05:41.334 --rc geninfo_unexecuted_blocks=1 00:05:41.334 00:05:41.334 ' 00:05:41.334 12:40:20 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:41.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.334 --rc genhtml_branch_coverage=1 00:05:41.334 --rc genhtml_function_coverage=1 00:05:41.334 --rc genhtml_legend=1 00:05:41.334 --rc geninfo_all_blocks=1 00:05:41.334 --rc geninfo_unexecuted_blocks=1 00:05:41.334 00:05:41.334 ' 00:05:41.334 12:40:20 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.334 --rc genhtml_branch_coverage=1 00:05:41.334 --rc genhtml_function_coverage=1 00:05:41.334 --rc genhtml_legend=1 00:05:41.334 --rc geninfo_all_blocks=1 00:05:41.334 --rc geninfo_unexecuted_blocks=1 00:05:41.334 00:05:41.334 ' 00:05:41.334 12:40:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:41.334 12:40:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:41.334 12:40:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:41.334 12:40:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:41.334 12:40:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.334 12:40:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.334 ************************************ 00:05:41.334 START TEST nvmf_target_core 00:05:41.334 ************************************ 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:41.334 * Looking for test storage... 00:05:41.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.334 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.597 --rc genhtml_branch_coverage=1 00:05:41.597 --rc genhtml_function_coverage=1 00:05:41.597 --rc genhtml_legend=1 00:05:41.597 --rc geninfo_all_blocks=1 00:05:41.597 --rc geninfo_unexecuted_blocks=1 00:05:41.597 00:05:41.597 ' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.597 --rc genhtml_branch_coverage=1 00:05:41.597 --rc genhtml_function_coverage=1 00:05:41.597 --rc genhtml_legend=1 00:05:41.597 --rc geninfo_all_blocks=1 00:05:41.597 --rc geninfo_unexecuted_blocks=1 00:05:41.597 00:05:41.597 ' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.597 --rc genhtml_branch_coverage=1 00:05:41.597 --rc genhtml_function_coverage=1 00:05:41.597 --rc genhtml_legend=1 00:05:41.597 --rc geninfo_all_blocks=1 00:05:41.597 --rc geninfo_unexecuted_blocks=1 00:05:41.597 00:05:41.597 ' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.597 --rc genhtml_branch_coverage=1 00:05:41.597 --rc genhtml_function_coverage=1 00:05:41.597 --rc genhtml_legend=1 00:05:41.597 --rc geninfo_all_blocks=1 00:05:41.597 --rc geninfo_unexecuted_blocks=1 00:05:41.597 00:05:41.597 ' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:41.597 
************************************ 00:05:41.597 START TEST nvmf_abort 00:05:41.597 ************************************ 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:41.597 * Looking for test storage... 00:05:41.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.597 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.860 --rc genhtml_branch_coverage=1 00:05:41.860 --rc genhtml_function_coverage=1 00:05:41.860 --rc genhtml_legend=1 00:05:41.860 --rc geninfo_all_blocks=1 00:05:41.860 --rc geninfo_unexecuted_blocks=1 00:05:41.860 00:05:41.860 ' 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.860 --rc genhtml_branch_coverage=1 00:05:41.860 --rc genhtml_function_coverage=1 00:05:41.860 --rc genhtml_legend=1 00:05:41.860 --rc geninfo_all_blocks=1 00:05:41.860 --rc geninfo_unexecuted_blocks=1 00:05:41.860 00:05:41.860 ' 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.860 --rc genhtml_branch_coverage=1 00:05:41.860 --rc genhtml_function_coverage=1 00:05:41.860 --rc genhtml_legend=1 00:05:41.860 --rc geninfo_all_blocks=1 00:05:41.860 --rc geninfo_unexecuted_blocks=1 00:05:41.860 00:05:41.860 ' 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.860 --rc genhtml_branch_coverage=1 00:05:41.860 --rc genhtml_function_coverage=1 00:05:41.860 --rc genhtml_legend=1 00:05:41.860 --rc geninfo_all_blocks=1 00:05:41.860 --rc geninfo_unexecuted_blocks=1 00:05:41.860 00:05:41.860 ' 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.860 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
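The "[: : integer expression expected" complaint recorded just above comes from nvmf/common.sh line 33 applying an arithmetic test to an empty expansion: the xtrace record shows the command ran as '[' '' -eq 1 ']'. The harness shrugs it off ([ simply returns non-zero and the script moves on), but a defaulted expansion would silence the noise. A minimal sketch of the guard, assuming the culprit is an integer-valued environment toggle; the name SPDK_TEST_FLAG is hypothetical, since the trace only shows the already-empty value:

# SPDK_TEST_FLAG is a hypothetical name -- the xtrace above only shows ''
# ${VAR:-0} substitutes 0 when the flag is unset or empty, so -eq always
# sees an integer and the "integer expression expected" error goes away
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi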
00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:41.861 12:40:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:50.011 12:40:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:50.011 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:50.011 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:50.011 12:40:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:50.011 Found net devices under 0000:31:00.0: cvl_0_0 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:50.011 Found net devices under 0000:31:00.1: cvl_0_1 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:50.011 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:50.012 12:40:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:50.012 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:50.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:50.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:05:50.274 00:05:50.274 --- 10.0.0.2 ping statistics --- 00:05:50.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.274 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:50.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:50.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:05:50.274 00:05:50.274 --- 10.0.0.1 ping statistics --- 00:05:50.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.274 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=397548 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 397548 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 397548 ']' 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.274 12:40:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:50.274 [2024-11-25 12:40:30.051670] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
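The nvmftestinit sequence above turns one dual-port NIC into a self-contained NVMe/TCP rig: the target port (cvl_0_0) is moved into a private network namespace as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, and a ping in each direction proves the path before any NVMe traffic flows. A condensed sketch of the same topology, with device names and addresses taken from the common.sh@265-291 records (address flushes and the iptables rule omitted here):

# the target side lives in its own namespace so both ends of the "cable"
# can coexist on one host with distinct network stacks
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port -> namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator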
00:05:50.274 [2024-11-25 12:40:30.051721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:50.274 [2024-11-25 12:40:30.155952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.536 [2024-11-25 12:40:30.194053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:50.536 [2024-11-25 12:40:30.194091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:50.536 [2024-11-25 12:40:30.194099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.536 [2024-11-25 12:40:30.194106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.536 [2024-11-25 12:40:30.194112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:50.536 [2024-11-25 12:40:30.195533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.536 [2024-11-25 12:40:30.195689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.536 [2024-11-25 12:40:30.195690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 [2024-11-25 12:40:30.888797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 Malloc0 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 Delay0 
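With the target up (pid 397548, started inside the namespace), the rpc_cmd records here and in the lines that follow assemble the device under test over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, a delay bdev wrapping it so I/O stays in flight long enough to be abortable, and subsystem cnode0 exposing it on 10.0.0.2:4420. rpc_cmd is the harness's wrapper around scripts/rpc.py; issued directly against the default /var/tmp/spdk.sock socket, the same sequence would look roughly like the sketch below (RPC names and values are verbatim from the log; the standalone invocation form is the assumption):

RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256     # flags as recorded above
$RPC bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB RAM disk, 4 KiB blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000         # delay args are in microseconds,
                                                        # so ~1 s per op: plenty of
                                                        # in-flight I/O for aborts to catch
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420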
00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 [2024-11-25 12:40:30.968119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.109 12:40:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:51.370 [2024-11-25 12:40:31.098328] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:53.286 Initializing NVMe Controllers 00:05:53.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:53.286 controller IO queue size 128 less than required 00:05:53.286 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:53.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:53.286 Initialization complete. Launching workers. 
00:05:53.286 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29066 00:05:53.286 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29127, failed to submit 62 00:05:53.286 success 29070, unsuccessful 57, failed 0 00:05:53.286 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:53.286 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.286 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.286 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.286 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:53.286 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:53.286 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:53.286 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:53.547 rmmod nvme_tcp 00:05:53.547 rmmod nvme_fabrics 00:05:53.547 rmmod nvme_keyring 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 397548 ']' 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 397548 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 397548 ']' 00:05:53.547 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 397548 00:05:53.548 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:53.548 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.548 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 397548 00:05:53.548 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:53.548 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:53.548 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 397548' 00:05:53.548 killing process with pid 397548 00:05:53.548 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 397548 00:05:53.548 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 397548 00:05:53.808 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:53.808 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:53.808 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:53.808 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:53.808 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:53.808 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:53.809 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:53.809 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:53.809 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:53.809 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:53.809 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:53.809 12:40:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.724 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:55.724 00:05:55.724 real 0m14.227s 00:05:55.724 user 0m13.945s 00:05:55.724 sys 0m7.254s 00:05:55.724 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.724 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.724 ************************************ 00:05:55.724 END TEST nvmf_abort 00:05:55.724 ************************************ 00:05:55.724 12:40:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:55.725 12:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.725 12:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.725 12:40:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:55.987 ************************************ 00:05:55.987 START TEST nvmf_ns_hotplug_stress 00:05:55.987 ************************************ 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:55.987 * Looking for test storage... 
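Before the next test repeats the same setup, the nvmf_abort teardown just logged is worth a note: its firewall cleanup is a tag-and-sweep pattern. Every rule the harness installs carries an identifying comment (the ipts record during setup wrapped iptables with -m comment --comment 'SPDK_NVMF:...'), so teardown never has to remember individual rule specs; it dumps the whole table, filters the tag, and restores the rest. Both halves, reproduced from the log:

# install: tag the rule with its own spec under the SPDK_NVMF marker
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# teardown: one pass removes every tagged rule, leaving all others untouched
iptables-save | grep -v SPDK_NVMF | iptables-restore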
00:05:55.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.987 --rc genhtml_branch_coverage=1 00:05:55.987 --rc genhtml_function_coverage=1 00:05:55.987 --rc genhtml_legend=1 00:05:55.987 --rc geninfo_all_blocks=1 00:05:55.987 --rc geninfo_unexecuted_blocks=1 00:05:55.987 00:05:55.987 ' 00:05:55.987 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.987 --rc genhtml_branch_coverage=1 00:05:55.987 --rc genhtml_function_coverage=1 00:05:55.987 --rc genhtml_legend=1 00:05:55.987 --rc geninfo_all_blocks=1 00:05:55.987 --rc geninfo_unexecuted_blocks=1 00:05:55.987 00:05:55.987 ' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.988 --rc genhtml_branch_coverage=1 00:05:55.988 --rc genhtml_function_coverage=1 00:05:55.988 --rc genhtml_legend=1 00:05:55.988 --rc geninfo_all_blocks=1 00:05:55.988 --rc geninfo_unexecuted_blocks=1 00:05:55.988 00:05:55.988 ' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.988 --rc genhtml_branch_coverage=1 00:05:55.988 --rc genhtml_function_coverage=1 00:05:55.988 --rc genhtml_legend=1 00:05:55.988 --rc geninfo_all_blocks=1 00:05:55.988 --rc geninfo_unexecuted_blocks=1 00:05:55.988 00:05:55.988 ' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:55.988 12:40:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:04.140 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.140 
12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:04.140 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.140 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:04.141 Found net devices under 0000:31:00.0: cvl_0_0 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:04.141 Found net devices under 0000:31:00.1: cvl_0_1 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.141 12:40:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.141 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:04.141 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.402 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.402 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.402 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:04.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:06:04.403 00:06:04.403 --- 10.0.0.2 ping statistics --- 00:06:04.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.403 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:06:04.403 00:06:04.403 --- 10.0.0.1 ping statistics --- 00:06:04.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.403 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=402952 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 402952 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
402952 ']' 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.403 12:40:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.403 [2024-11-25 12:40:44.246777] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:06:04.403 [2024-11-25 12:40:44.246841] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.664 [2024-11-25 12:40:44.356141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.664 [2024-11-25 12:40:44.408517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.664 [2024-11-25 12:40:44.408567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.664 [2024-11-25 12:40:44.408580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.664 [2024-11-25 12:40:44.408587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.664 [2024-11-25 12:40:44.408593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
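
The namespace plumbing traced through nvmf/common.sh@265-291 above splits the two NIC ports into a target side and an initiator side so a single host can exercise NVMe/TCP end to end. A condensed sketch of those steps, using the device names and addresses from the trace (error handling and the ipts comment wrapper omitted):

  ip netns add cvl_0_0_ns_spdk                        # namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt process itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why its pid 402952 is waited on over /var/tmp/spdk.sock above.
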
00:06:04.664 [2024-11-25 12:40:44.410442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.664 [2024-11-25 12:40:44.410611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.664 [2024-11-25 12:40:44.410612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.235 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.235 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:05.235 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:05.235 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.235 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:05.235 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:05.235 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:05.236 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:05.496 [2024-11-25 12:40:45.237708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.496 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:05.758 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:05.758 [2024-11-25 12:40:45.591116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:05.758 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:06.019 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:06.281 Malloc0 00:06:06.281 12:40:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:06.281 Delay0 00:06:06.281 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.542 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:06.803 NULL1 00:06:06.803 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:06.803 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=403641 00:06:06.803 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:06.803 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:06.804 12:40:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.188 Read completed with error (sct=0, sc=11) 00:06:08.188 12:40:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.188 12:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:08.188 12:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:08.449 true 00:06:08.449 12:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:08.449 12:40:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.392 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.392 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:09.392 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:09.653 true 00:06:09.653 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:09.653 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.914 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.914 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:09.914 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:10.175 true 00:06:10.176 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:10.176 12:40:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.437 12:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.437 12:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:10.437 12:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:10.699 true 00:06:10.699 12:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:10.699 12:40:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.645 12:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.645 12:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:11.645 12:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:11.907 true 00:06:11.907 12:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:11.907 12:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.169 12:40:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.169 12:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:12.170 12:40:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:12.431 true 00:06:12.431 12:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:12.431 12:40:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.829 12:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.829 12:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:13.829 12:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:14.143 true 00:06:14.143 12:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:14.143 12:40:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.774 12:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.049 12:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:15.049 12:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:15.049 true 00:06:15.311 12:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:15.311 12:40:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.311 12:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.573 12:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:15.573 12:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:06:15.834 true 00:06:15.834 12:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:15.834 12:40:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.779 12:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.038 12:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:17.038 12:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:17.297 true 00:06:17.297 12:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:17.298 12:40:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.237 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.237 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:18.237 12:40:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:18.498 true 00:06:18.498 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:18.498 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.498 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.759 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:18.759 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:19.019 true 00:06:19.019 12:40:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:19.019 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.019 12:40:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.278 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:19.278 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:19.537 true 00:06:19.537 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:19.537 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.798 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.798 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:19.798 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:20.058 true 00:06:20.058 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:20.058 12:40:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.441 12:41:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.441 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.441 12:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:21.441 12:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:21.441 true 00:06:21.441 12:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:21.441 12:41:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.384 12:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.645 12:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:22.645 12:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:22.645 true 00:06:22.645 12:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:22.645 12:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.906 12:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.168 12:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:23.168 12:41:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:23.168 true 00:06:23.168 12:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:23.168 12:41:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.554 12:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:24.554 12:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:24.554 12:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:24.815 true 00:06:24.815 12:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:24.816 12:41:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.761 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.761 12:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.761 12:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:25.761 12:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:26.021 true 00:06:26.021 12:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:26.021 12:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.282 12:41:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.282 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:26.282 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:26.544 true 00:06:26.544 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:26.544 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.805 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.805 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:26.805 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:27.066 true 00:06:27.066 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:27.066 12:41:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.327 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.327 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:27.327 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:27.588 true 00:06:27.588 12:41:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:27.588 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.850 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.110 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:28.110 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:28.110 true 00:06:28.110 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:28.110 12:41:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.371 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.631 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:28.631 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:28.631 true 00:06:28.632 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:28.632 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.892 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.153 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:29.153 12:41:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:29.153 true 00:06:29.153 12:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:29.153 12:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.095 12:41:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.356 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:30.356 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:30.356 true 00:06:30.356 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:30.356 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.618 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.879 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:30.879 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:30.879 true 00:06:30.879 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:30.879 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.140 12:41:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.400 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:31.400 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:31.400 true 00:06:31.661 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:31.661 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.661 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.921 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:31.921 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:31.921 true 00:06:32.182 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:32.182 12:41:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.182 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.442 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:32.442 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:32.702 true 00:06:32.702 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:32.702 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.702 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.963 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:32.963 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:33.224 true 00:06:33.224 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641 00:06:33.224 12:41:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.425 12:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.425 12:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:34.425 12:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:34.685 true 00:06:34.685 12:41:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641
00:06:34.685 12:41:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:35.626 12:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:35.626 12:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:06:35.626 12:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:06:35.888 true
00:06:35.888 12:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641
00:06:35.888 12:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:36.149 12:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:36.149 12:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:06:36.149 12:41:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:06:36.410 true
00:06:36.410 12:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641
00:06:36.410 12:41:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:37.351 Initializing NVMe Controllers
00:06:37.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:37.351 Controller IO queue size 128, less than required.
00:06:37.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:37.351 Controller IO queue size 128, less than required.
00:06:37.351 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:37.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:37.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:37.351 Initialization complete. Launching workers.
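The pair of "Controller IO queue size 128, less than required" warnings comes from the I/O workload that has just connected: it asked for a deeper queue than the 128 entries the target grants per I/O queue, so the surplus requests wait inside the host-side NVMe driver instead of on the wire. It is a throughput caveat rather than an error, which is why initialization still completes and the workers launch.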
00:06:37.351 ========================================================
00:06:37.351                                                                              Latency(us)
00:06:37.351 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:37.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2042.52       1.00   36151.83    1480.90 1045757.45
00:06:37.351 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16615.90       8.11    7703.21    1441.03  498128.02
00:06:37.351 ========================================================
00:06:37.351 Total                                                                  :   18658.42       9.11   10817.46    1441.03 1045757.45
00:06:37.351
00:06:37.612 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:37.612 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:06:37.612 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:06:37.872 true
00:06:37.872 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 403641
00:06:37.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (403641) - No such process
00:06:37.872 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 403641
00:06:37.872 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:38.133 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:38.133 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:38.133 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:38.133 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:38.133 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:38.133 12:41:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:38.393 null0
00:06:38.393 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:38.393 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:38.393 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:38.653 null1
00:06:38.653 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:38.653 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:38.653 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
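In the summary table above, the Total row sums IOPS and MiB/s across the two namespaces, while its Average latency is the IOPS-weighted mean of the per-row averages (NSID 1, which the loop keeps re-attaching as the Delay0 delay bdev, runs far slower than NSID 2). A quick check of that arithmetic with the values copied from the table:

    awk 'BEGIN {
        i1 = 2042.52;  a1 = 36151.83    # NSID 1: IOPS, average latency (us)
        i2 = 16615.90; a2 = 7703.21     # NSID 2
        printf "total IOPS  : %.2f\n", i1 + i2                      # 18658.42
        printf "weighted avg: %.2f\n", (i1*a1 + i2*a2) / (i1 + i2)  # ~10817, matches 10817.46 up to input rounding
    }'

Right after the table, kill -0 403641 fails with "No such process": the I/O generator has exited, so the hotplug loop ends, the script reaps it with wait (@53), and both namespaces are removed (@54, @55) before the multi-threaded phase is set up.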
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:38.653 null2 00:06:38.653 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.653 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.653 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:38.914 null3 00:06:38.914 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.914 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.914 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:39.174 null4 00:06:39.174 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.174 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.174 12:41:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:39.174 null5 00:06:39.438 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.438 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.438 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:39.438 null6 00:06:39.438 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.438 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.438 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:39.700 null7 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
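The eight bdevs null0 through null7 created above are 100 MB null bdevs with a 4096-byte block size, one per worker thread the script is about to launch (nthreads=8, @58). The creation loop reduces to the following sketch, reusing the RPC stand-in from the earlier one; each rpc.py call echoes the new bdev's name, which is where the bare "null0", "null1", ... lines come from:

    for ((i = 0; i < nthreads; i++)); do              # @59
        "$RPC" bdev_null_create "null$i" 100 4096     # @60
    done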
00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.700 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
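From this point the trace interleaves eight backgrounded add_remove workers, which is why the @16-@18 lines repeat in no fixed order. Each worker owns one namespace ID and one null bdev and attaches/detaches that pair ten times; the launcher collects the worker PIDs (410143, 410144, ... in the wait at @66 below). A reconstruction from the trace, with the same stand-in names as above:

    add_remove() {                                                      # @14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                  # @16
            "$RPC" nvmf_subsystem_add_ns -n "$nsid" "$SUBSYS" "$bdev"   # @17
            "$RPC" nvmf_subsystem_remove_ns "$SUBSYS" "$nsid"           # @18
        done
    }

    pids=()                                  # @58
    for ((i = 0; i < nthreads; i++)); do     # @62
        add_remove $((i + 1)) "null$i" &     # @63: one worker per namespace
        pids+=($!)                           # @64
    done
    wait "${pids[@]}"                        # @66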
00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 410143 410144 410146 410148 410150 410152 410154 410156 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.701 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.963 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.963 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.963 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.963 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.964 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.225 12:41:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.225 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.225 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.225 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.225 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.225 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.225 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.225 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.486 12:41:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.486 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.747 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.008 12:41:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.008 12:41:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.008 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.269 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.269 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.269 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 12:41:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.269 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.269 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.269 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.269 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.270 12:41:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.270 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.531 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.791 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.791 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.791 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.792 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.053 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.315 12:41:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.315 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.316 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
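Each entry in the trace above carries the same prefix: the wall-clock time, a dotted test domain (nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress), the script and line that issued the command (e.g. target/ns_hotplug_stress.sh@16), and the xtrace marker. Bash emits this through PS4 once set -x is active; a minimal sketch of such a prompt follows, assuming a TEST_DOMAIN variable and a flattened source path (the exact format string SPDK sets in common/autotest_common.sh may differ):

    # Sketch of a PS4 that yields "domain -- file.sh@LINE -- #" xtrace prefixes.
    # TEST_DOMAIN and the path flattening are assumptions; PS4 undergoes
    # parameter and command substitution each time a traced command is printed.
    export PS4=' $(date +%T) ${TEST_DOMAIN:-} -- ${BASH_SOURCE##*/}@${LINENO} -- # '
    set -x   # every command now echoes with its source file and line attached

The first character of PS4 is replicated to show nesting depth, which is why deeper subshells would indent further in a trace like this one.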
00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.578 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.840 
12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.840 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.101 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.102 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.102 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:43.102 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:43.102 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
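The loop body alternates two target RPCs: nvmf_subsystem_add_ns attaches one of the preallocated null bdevs to cnode1 under an explicit namespace ID, and nvmf_subsystem_remove_ns detaches by ID. As issued in this run, where nsid 8 pairs with the bdev null7 throughout:

    # The two RPCs the stress loop drives, copied from the trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8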
00:06:43.102 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.102 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.102 12:41:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
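The bare (( ++i )) / (( i < 10 )) pairs interleaved above come from several workers draining their own ten iterations concurrently, which is what scrambles the add/remove ordering in the trace. A plausible reconstruction of lines 16-18 of ns_hotplug_stress.sh from this output; only the loop bound, RPC names, and null-bdev naming are taken from the log, while the per-namespace backgrounding is an assumption:

    # Hypothetical reconstruction of ns_hotplug_stress.sh@16-18 from the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    for nsid in {1..8}; do
        (
            for (( i = 0; i < 10; ++i )); do                                # line 16
                $rpc nvmf_subsystem_add_ns -n $nsid $subsys null$(( nsid - 1 ))  # line 17
                $rpc nvmf_subsystem_remove_ns $subsys $nsid                 # line 18
            done
        ) &   # one worker per namespace; concurrency assumed from the interleaving
    done
    wait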
00:06:43.362 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:43.624 rmmod nvme_tcp 00:06:43.624 rmmod nvme_fabrics 00:06:43.624 rmmod nvme_keyring 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 402952 ']' 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 402952 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 402952 ']' 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 402952 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 402952 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 402952' 00:06:43.624 killing process with pid 402952 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 402952 00:06:43.624 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 402952 
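With the workers drained, the EXIT trap is cleared and nvmftestfini unwinds the target. Condensing the steps traced above and immediately below into one sketch; the step order follows the log (nvmf/common.sh@516 onward), but the function body is an approximation, not SPDK's actual implementation:

    # Approximate condensation of the nvmftestfini teardown seen in the trace.
    nvmftestfini() {
        sync
        for mod in nvme-tcp nvme-fabrics; do
            modprobe -v -r "$mod" || true   # the real script retries up to 20 times
        done
        kill "$nvmfpid"                      # nvmf_tgt reactor, pid 402952 in this run
        iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK firewall rules
        ip netns delete cvl_0_0_ns_spdk      # what _remove_spdk_ns appears to do here
        ip -4 addr flush cvl_0_1             # clear the initiator-side address
    }

The rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines in the log are the kernel's verbose output from that modprobe -v -r step.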
00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.885 12:41:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.803 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:45.803 00:06:45.803 real 0m50.070s 00:06:45.803 user 3m13.859s 00:06:45.803 sys 0m16.618s 00:06:45.803 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.803 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.803 ************************************ 00:06:45.803 END TEST nvmf_ns_hotplug_stress 00:06:45.803 ************************************ 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:46.064 ************************************ 00:06:46.064 START TEST nvmf_delete_subsystem 00:06:46.064 ************************************ 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:46.064 * Looking for test storage... 
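The real/user/sys block and the START TEST / END TEST banners framing each suite come from the harness's run_test wrapper, which times the test script and brackets its output. Its approximate shape, inferred only from the banners and timing visible in this log (the actual helper in common/autotest_common.sh also validates arguments and handles xtrace state):

    # Approximate shape of run_test as suggested by the banners above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # runs the test script, producing real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

Here it is invoked as run_test nvmf_delete_subsystem .../test/nvmf/target/delete_subsystem.sh --transport=tcp, which produces the START banner above and hands the transport flag through to the script.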
00:06:46.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:46.064 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.332 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:46.332 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.332 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.332 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.332 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:46.332 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.332 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.332 --rc genhtml_branch_coverage=1 00:06:46.332 --rc genhtml_function_coverage=1 00:06:46.332 --rc genhtml_legend=1 00:06:46.332 --rc geninfo_all_blocks=1 00:06:46.332 --rc geninfo_unexecuted_blocks=1 00:06:46.332 00:06:46.332 ' 00:06:46.332 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.332 --rc genhtml_branch_coverage=1 00:06:46.333 --rc genhtml_function_coverage=1 00:06:46.333 --rc genhtml_legend=1 00:06:46.333 --rc geninfo_all_blocks=1 00:06:46.333 --rc geninfo_unexecuted_blocks=1 00:06:46.333 00:06:46.333 ' 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.333 --rc genhtml_branch_coverage=1 00:06:46.333 --rc genhtml_function_coverage=1 00:06:46.333 --rc genhtml_legend=1 00:06:46.333 --rc geninfo_all_blocks=1 00:06:46.333 --rc geninfo_unexecuted_blocks=1 00:06:46.333 00:06:46.333 ' 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.333 --rc genhtml_branch_coverage=1 00:06:46.333 --rc genhtml_function_coverage=1 00:06:46.333 --rc genhtml_legend=1 00:06:46.333 --rc geninfo_all_blocks=1 00:06:46.333 --rc geninfo_unexecuted_blocks=1 00:06:46.333 00:06:46.333 ' 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.333 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.334 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.335 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.335 12:41:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:46.335 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:46.338 12:41:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:54.480 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.480 
12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:54.480 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:54.480 Found net devices under 0000:31:00.0: cvl_0_0 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:54.480 Found net devices under 0000:31:00.1: cvl_0_1 
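Device discovery above matches the host's two E810 functions (0x8086:0x159b at 0000:31:00.0 and .1) against a whitelist of Intel and Mellanox PCI IDs, then resolves each PCI address to its kernel interface through sysfs. The mapping step, restated from the nvmf/common.sh trace as a standalone loop (pci_devs is assumed to hold addresses like 0000:31:00.0, as in this run):

    # The sysfs lookup performed in the trace above, restated on its own.
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

This is why the log reports cvl_0_0 and cvl_0_1: they are the kernel names of the two discovered E810 ports, which the TCP setup that follows splits into a target interface and an initiator interface.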
00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:54.480 12:41:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:54.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:06:54.480 00:06:54.480 --- 10.0.0.2 ping statistics --- 00:06:54.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.480 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:06:54.480 00:06:54.480 --- 10.0.0.1 ping statistics --- 00:06:54.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.480 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=415956 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 415956 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 415956 ']' 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.480 12:41:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.480 12:41:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.743 [2024-11-25 12:41:34.425705] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:06:54.743 [2024-11-25 12:41:34.425774] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.743 [2024-11-25 12:41:34.520735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.743 [2024-11-25 12:41:34.560490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.743 [2024-11-25 12:41:34.560528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.743 [2024-11-25 12:41:34.560536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.743 [2024-11-25 12:41:34.560543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.743 [2024-11-25 12:41:34.560549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.743 [2024-11-25 12:41:34.561901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.743 [2024-11-25 12:41:34.561928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.684 [2024-11-25 12:41:35.283448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.684 12:41:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.684 [2024-11-25 12:41:35.299611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.684 NULL1 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.684 Delay0 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=416054 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:55.684 12:41:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:55.684 [2024-11-25 12:41:35.364358] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
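Stripped of the xtrace noise, the bring-up the log has walked through so far condenses to a short provisioning script. A sketch, assuming the harness's rpc_cmd resolves to scripts/rpc.py on the default /var/tmp/spdk.sock; the interface names, addresses, and arguments are the ones traced above:

# Split the NIC pair: target port inside a namespace, initiator port outside,
# so NVMe/TCP traffic crosses the physical link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port

# Start the target inside the namespace, then provision it over RPC.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
# (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512                     # 1000 MB bdev, 512 B blocks
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev's one-second artificial latencies (the four 1000000 µs arguments) are what keep the initiator's queues full of in-flight commands, which is exactly the state the deletion below wants to race against.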
00:06:57.593 12:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:57.593 12:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.593 12:41:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:57.854 Read completed with error (sct=0, sc=8)
00:06:57.854 Read completed with error (sct=0, sc=8)
00:06:57.854 Read completed with error (sct=0, sc=8)
00:06:57.854 starting I/O failed: -6
00:06:57.854 Read completed with error (sct=0, sc=8)
00:06:57.854 Write completed with error (sct=0, sc=8)
00:06:57.854 Read completed with error (sct=0, sc=8)
00:06:57.854 Write completed with error (sct=0, sc=8)
00:06:57.854 starting I/O failed: -6
[... further identical 'Read/Write completed with error (sct=0, sc=8)' completions and periodic 'starting I/O failed: -6' lines elided; they continue interleaved with the qpair state errors below ...]
00:06:57.855 [2024-11-25 12:41:37.569486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12132c0 is same with the state(6) to be set
00:06:57.855 [2024-11-25 12:41:37.573625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4e3c000c70 is same with the state(6) to be set
00:06:58.796 [2024-11-25 12:41:38.542891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12145e0 is same with the state(6) to be set
00:06:58.796 [2024-11-25 12:41:38.573459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12130e0 is same with the state(6) to be set
00:06:58.797 [2024-11-25 12:41:38.573564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12134a0 is same with the state(6) to be set
00:06:58.797 [2024-11-25 12:41:38.574940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4e3c00d810 is same with the state(6) to be set
00:06:58.797 [2024-11-25 12:41:38.576869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4e3c00d050 is same with the state(6) to be set
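The abort storm above is the intended outcome, not a harness failure: the subsystem is deleted while spdk_nvme_perf still has 128 commands per queue outstanding, so each in-flight command completes with sct=0, sc=8 (NVMe generic status 0x8, command aborted due to SQ deletion) and the initiator's TCP qpairs tear down. The pattern, sketched with rpc.py standing in for the harness's rpc_cmd:

# Delete-under-load: start the workload, let it ramp, then pull the subsystem.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2                                     # give perf time to connect and queue I/O
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# every outstanding command now fails back to perf as an sct=0/sc=8 abort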
00:06:58.797 Initializing NVMe Controllers
00:06:58.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:58.797 Controller IO queue size 128, less than required.
00:06:58.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:58.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:58.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:58.797 Initialization complete. Launching workers.
00:06:58.797 ========================================================
00:06:58.797                                                            Latency(us)
00:06:58.797 Device Information                                       :       IOPS      MiB/s    Average        min        max
00:06:58.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     177.24       0.09  879720.86     252.09 1007309.38
00:06:58.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     150.85       0.07  942349.77     195.45 1010437.94
00:06:58.797 ========================================================
00:06:58.797 Total                                                    :     328.09       0.16  908516.85     195.45 1010437.94
00:06:58.797
00:06:58.797 [2024-11-25 12:41:38.577423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12145e0 (9): Bad file descriptor
00:06:58.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:58.797 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.797 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:58.797 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 416054 00:06:58.797 12:41:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 416054 00:06:59.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (416054) - No such process 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 416054 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 416054 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 416054 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.369 12:41:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.369 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.370 [2024-11-25 12:41:39.108232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=416803 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416803 00:06:59.370 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.370 [2024-11-25 12:41:39.185837] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
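With the subsystem recreated and perf relaunched (pid 416803, a 3-second run this time), the script polls for the process to exit rather than blocking on wait outright. A sketch of the loop traced from delete_subsystem.sh lines 56-60 below (the script's exact statement order may differ):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0 probes liveness without killing
    sleep 0.5
    (( delay++ > 20 )) && exit 1            # fail the test after ~10 s of polling
done
wait "$perf_pid"                            # reap perf and collect its exit status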
00:07:00.045 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.045 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416803 00:07:00.045 12:41:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.317 12:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.317 12:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416803 00:07:00.317 12:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.888 12:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.888 12:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416803 00:07:00.888 12:41:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.460 12:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.460 12:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416803 00:07:01.460 12:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.032 12:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.032 12:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416803 00:07:02.032 12:41:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.293 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.293 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416803 00:07:02.293 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.554 Initializing NVMe Controllers 00:07:02.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.554 Controller IO queue size 128, less than required. 00:07:02.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:02.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:02.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:02.554 Initialization complete. Launching workers. 
00:07:02.554 ========================================================
00:07:02.554                                                            Latency(us)
00:07:02.554 Device Information                                       :       IOPS      MiB/s    Average        min        max
00:07:02.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002141.99 1000177.47 1006104.27
00:07:02.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1002880.47 1000257.19 1009395.00
00:07:02.554 ========================================================
00:07:02.554 Total                                                    :     256.00       0.12 1002511.23 1000177.47 1009395.00
00:07:02.554
00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416803 00:07:02.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (416803) - No such process 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 416803 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:02.815 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:02.815 rmmod nvme_tcp 00:07:02.815 rmmod nvme_fabrics 00:07:02.815 rmmod nvme_keyring 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 415956 ']' 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 415956 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 415956 ']' 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 415956 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 415956 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 415956' 00:07:03.077 killing process with pid 415956 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 415956 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 415956 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.077 12:41:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:05.625 00:07:05.625 real 0m19.241s 00:07:05.625 user 0m31.144s 00:07:05.625 sys 0m7.360s 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.625 ************************************ 00:07:05.625 END TEST nvmf_delete_subsystem 00:07:05.625 ************************************ 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.625 ************************************ 00:07:05.625 START TEST nvmf_host_management 00:07:05.625 ************************************ 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:05.625 * Looking for test storage... 
00:07:05.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:05.625 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.626 --rc genhtml_branch_coverage=1 00:07:05.626 --rc genhtml_function_coverage=1 00:07:05.626 --rc genhtml_legend=1 00:07:05.626 --rc geninfo_all_blocks=1 00:07:05.626 --rc geninfo_unexecuted_blocks=1 00:07:05.626 00:07:05.626 ' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.626 --rc genhtml_branch_coverage=1 00:07:05.626 --rc genhtml_function_coverage=1 00:07:05.626 --rc genhtml_legend=1 00:07:05.626 --rc geninfo_all_blocks=1 00:07:05.626 --rc geninfo_unexecuted_blocks=1 00:07:05.626 00:07:05.626 ' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.626 --rc genhtml_branch_coverage=1 00:07:05.626 --rc genhtml_function_coverage=1 00:07:05.626 --rc genhtml_legend=1 00:07:05.626 --rc geninfo_all_blocks=1 00:07:05.626 --rc geninfo_unexecuted_blocks=1 00:07:05.626 00:07:05.626 ' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.626 --rc genhtml_branch_coverage=1 00:07:05.626 --rc genhtml_function_coverage=1 00:07:05.626 --rc genhtml_legend=1 00:07:05.626 --rc geninfo_all_blocks=1 00:07:05.626 --rc geninfo_unexecuted_blocks=1 00:07:05.626 00:07:05.626 ' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:05.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.626 12:41:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.770 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:13.771 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:13.771 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:13.771 Found net devices under 0000:31:00.0: cvl_0_0 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.771 12:41:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:13.771 Found net devices under 0000:31:00.1: cvl_0_1 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:13.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:07:13.771 00:07:13.771 --- 10.0.0.2 ping statistics --- 00:07:13.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.771 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:07:13.771 00:07:13.771 --- 10.0.0.1 ping statistics --- 00:07:13.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.771 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:13.771 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:13.772 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.772 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:13.772 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=422449 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 422449 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:14.033 12:41:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 422449 ']' 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.033 12:41:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.033 [2024-11-25 12:41:53.742305] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:07:14.033 [2024-11-25 12:41:53.742353] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.033 [2024-11-25 12:41:53.846168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.033 [2024-11-25 12:41:53.891631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.033 [2024-11-25 12:41:53.891685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.033 [2024-11-25 12:41:53.891694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.033 [2024-11-25 12:41:53.891701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.033 [2024-11-25 12:41:53.891707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
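For orientation, the nvmf_tcp_init sequence traced above splits the two E810 ports into a target side and an initiator side. A condensed sketch of what those ip/iptables commands set up (names and addresses taken from the trace; the two ports are presumably cabled back-to-back on this rig, and the flush/loopback steps are omitted):

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # port 0 becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in

The two pings above are the sanity check that 10.0.0.1 and 10.0.0.2 can reach each other across the namespace boundary before any NVMe traffic is attempted.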
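Also worth decoding before the reactor notices below: nvmfappstart passed -m 0x1E, and an SPDK core mask selects CPUs by bit position, so 0x1E = binary 11110 means core 0 is excluded and cores 1-4 each get a reactor, matching the four "Reactor started on core N" lines that follow. The later bdevperf run uses -c 0x1 (core 0 only), which keeps the initiator off the target's cores. A one-liner to decode any such mask, as a sketch:

    python3 -c 'm = 0x1E; print([c for c in range(m.bit_length()) if m >> c & 1])'   # -> [1, 2, 3, 4]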
00:07:14.033 [2024-11-25 12:41:53.893906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.033 [2024-11-25 12:41:53.894092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.033 [2024-11-25 12:41:53.894257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.033 [2024-11-25 12:41:53.894258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.974 [2024-11-25 12:41:54.590437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.974 Malloc0 00:07:14.974 [2024-11-25 12:41:54.660190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=422522 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 422522 /var/tmp/bdevperf.sock 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 422522 ']' 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:14.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:14.974 { 00:07:14.974 "params": { 00:07:14.974 "name": "Nvme$subsystem", 00:07:14.974 "trtype": "$TEST_TRANSPORT", 00:07:14.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.974 "adrfam": "ipv4", 00:07:14.974 "trsvcid": "$NVMF_PORT", 00:07:14.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.974 "hdgst": ${hdgst:-false}, 00:07:14.974 "ddgst": ${ddgst:-false} 00:07:14.974 }, 00:07:14.974 "method": "bdev_nvme_attach_controller" 00:07:14.974 } 00:07:14.974 EOF 00:07:14.974 )") 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:14.974 12:41:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:14.974 "params": { 00:07:14.974 "name": "Nvme0", 00:07:14.975 "trtype": "tcp", 00:07:14.975 "traddr": "10.0.0.2", 00:07:14.975 "adrfam": "ipv4", 00:07:14.975 "trsvcid": "4420", 00:07:14.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:14.975 "hdgst": false, 00:07:14.975 "ddgst": false 00:07:14.975 }, 00:07:14.975 "method": "bdev_nvme_attach_controller" 00:07:14.975 }' 00:07:14.975 [2024-11-25 12:41:54.765035] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
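One detail of the bdevperf invocation above that is easy to miss in the trace: --json /dev/fd/63 is bash process substitution, i.e. host_management.sh@72 feeds the output of gen_nvmf_target_json 0 to bdevperf as its config file without touching disk. The same wiring as a standalone sketch (paths abbreviated):

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
        --json <(gen_nvmf_target_json 0)

The generated config, printed by nvmf/common.sh@586 just above, attaches a single controller Nvme0 over TCP to 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode0, so the 64-deep, 64 KiB verify workload runs for 10 seconds against the target started earlier in the namespace.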
00:07:14.975 [2024-11-25 12:41:54.765089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422522 ] 00:07:14.975 [2024-11-25 12:41:54.843316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.235 [2024-11-25 12:41:54.879718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.235 Running I/O for 10 seconds... 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:15.808 12:41:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.808 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.808 [2024-11-25 12:41:55.627218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.808 [2024-11-25 12:41:55.627398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf30630 is same with the state(6) to be set
00:07:15.808 [2024-11-25 12:41:55.627405 .. 12:41:55.627695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set (same notice repeated once per pending command at every timestamp in this range)
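This error flood is the intended fallout of the host_management.sh@84 call above: nvmf_subsystem_remove_host strips host0 from cnode0's allowed-host list while bdevperf still has a full queue of reads outstanding, so the target tears down the TCP qpair mid-I/O. The dump that follows shows each of the 64 queued READs (cid 0-63) completing as "ABORTED - SQ DELETION"; bdevperf then disconnects and resets the controller, and host_management.sh@85 re-adds the host so the reset can reconnect. Outside the harness, the same two steps would be roughly the following (rpc_cmd in this test framework is, in effect, a wrapper around scripts/rpc.py):

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0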
00:07:15.809 [2024-11-25 12:41:55.627702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30630 is same with the state(6) to be set 00:07:15.809 [2024-11-25 12:41:55.628167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 
12:41:55.628375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 
12:41:55.628547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.809 [2024-11-25 12:41:55.628614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.809 [2024-11-25 12:41:55.628624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 
12:41:55.628716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 12:41:55.628877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:15.810 [2024-11-25 12:41:55.628886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.810 [2024-11-25 
12:41:55.628894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:15.810 [2024-11-25 12:41:55.628903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 47 near-identical records elided (12:41:55.628911 through 12:41:55.629306): READ sqid:1 cid:41-63 nsid:1 lba:119936-122752 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:07:15.811 [2024-11-25 12:41:55.629315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbdbb0 is same with the state(6) to be set
00:07:15.811 [2024-11-25 12:41:55.630579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:15.811 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:15.811 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:15.811 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:15.811 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:15.811 task offset: 114688 on job bdev=Nvme0n1 fails
00:07:15.811
00:07:15.811 Latency(us)
00:07:15.811 [2024-11-25T11:41:55.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:15.811 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:15.811 Job: Nvme0n1 ended in about 0.60 seconds with error
00:07:15.811 Verification LBA range: start 0x0 length 0x400
00:07:15.811 Nvme0n1 : 0.60 1484.66 92.79 106.05 0.00 39315.94 7700.48 36481.71
00:07:15.811 [2024-11-25T11:41:55.714Z] ===================================================================================================================
00:07:15.811 [2024-11-25T11:41:55.714Z] Total : 1484.66 92.79 106.05 0.00 39315.94 7700.48 36481.71
00:07:15.811 [2024-11-25 12:41:55.632875] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:15.811 [2024-11-25 12:41:55.632909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cad040 (9): Bad file descriptor
00:07:15.811 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- #
[[ 0 == 0 ]] 00:07:15.811 12:41:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:16.071 [2024-11-25 12:41:55.777094] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 422522 00:07:17.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (422522) - No such process 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:17.013 { 00:07:17.013 "params": { 00:07:17.013 "name": "Nvme$subsystem", 00:07:17.013 "trtype": "$TEST_TRANSPORT", 00:07:17.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.013 "adrfam": "ipv4", 00:07:17.013 "trsvcid": "$NVMF_PORT", 00:07:17.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.013 "hdgst": ${hdgst:-false}, 00:07:17.013 "ddgst": ${ddgst:-false} 00:07:17.013 }, 00:07:17.013 "method": "bdev_nvme_attach_controller" 00:07:17.013 } 00:07:17.013 EOF 00:07:17.013 )") 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:17.013 12:41:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:17.013 "params": { 00:07:17.013 "name": "Nvme0", 00:07:17.013 "trtype": "tcp", 00:07:17.013 "traddr": "10.0.0.2", 00:07:17.013 "adrfam": "ipv4", 00:07:17.013 "trsvcid": "4420", 00:07:17.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.013 "hdgst": false, 00:07:17.013 "ddgst": false 00:07:17.013 }, 00:07:17.013 "method": "bdev_nvme_attach_controller" 00:07:17.013 }' 00:07:17.013 [2024-11-25 12:41:56.695288] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:07:17.013 [2024-11-25 12:41:56.695340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422966 ] 00:07:17.013 [2024-11-25 12:41:56.773513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.013 [2024-11-25 12:41:56.808987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.274 Running I/O for 1 seconds... 00:07:18.214 1605.00 IOPS, 100.31 MiB/s 00:07:18.215 Latency(us) 00:07:18.215 [2024-11-25T11:41:58.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.215 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:18.215 Verification LBA range: start 0x0 length 0x400 00:07:18.215 Nvme0n1 : 1.01 1654.57 103.41 0.00 0.00 37785.83 1658.88 43253.76 00:07:18.215 [2024-11-25T11:41:58.118Z] =================================================================================================================== 00:07:18.215 [2024-11-25T11:41:58.118Z] Total : 1654.57 103.41 0.00 0.00 37785.83 1658.88 43253.76 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:18.476 rmmod nvme_tcp 00:07:18.476 rmmod nvme_fabrics 00:07:18.476 rmmod nvme_keyring 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 422449 ']' 00:07:18.476 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 422449 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 422449 ']' 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 422449 00:07:18.477 12:41:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 422449 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 422449' 00:07:18.477 killing process with pid 422449 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 422449 00:07:18.477 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 422449 00:07:18.737 [2024-11-25 12:41:58.448628] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.737 12:41:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.653 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:20.914 00:07:20.914 real 0m15.461s 00:07:20.914 user 0m23.426s 00:07:20.914 sys 0m7.221s 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.914 ************************************ 00:07:20.914 END TEST nvmf_host_management 00:07:20.914 ************************************ 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
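For readers skimming the trace, the nvmftestfini sequence above (sync, module unload, killprocess, iptables cleanup, namespace removal) reduces to a handful of commands. A minimal sketch in bash, assuming the standard SPDK test environment; the exact body of _remove_spdk_ns is not shown in the trace, so the netns deletion below is an assumption:

  # Sketch of the nvmftestfini teardown as traced above
  sync                                        # flush outstanding I/O before unloading kernel modules
  for i in {1..20}; do                        # the harness retries the unload up to 20 times
    modprobe -v -r nvme-tcp && break          # also pulls out nvme_fabrics/nvme_keyring (the rmmod lines above)
  done
  modprobe -v -r nvme-fabrics
  kill 422449 && wait 422449                  # killprocess: stop the nvmf_tgt app by pid
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK's tagged test rules
  ip netns delete cvl_0_0_ns_spdk             # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                    # final flush, matching the last trace record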
00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.914 ************************************ 00:07:20.914 START TEST nvmf_lvol 00:07:20.914 ************************************ 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:20.914 * Looking for test storage... 00:07:20.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.914 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.176 --rc genhtml_branch_coverage=1 00:07:21.176 --rc genhtml_function_coverage=1 00:07:21.176 --rc genhtml_legend=1 00:07:21.176 --rc geninfo_all_blocks=1 00:07:21.176 --rc geninfo_unexecuted_blocks=1 00:07:21.176 00:07:21.176 ' 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.176 --rc genhtml_branch_coverage=1 00:07:21.176 --rc genhtml_function_coverage=1 00:07:21.176 --rc genhtml_legend=1 00:07:21.176 --rc geninfo_all_blocks=1 00:07:21.176 --rc geninfo_unexecuted_blocks=1 00:07:21.176 00:07:21.176 ' 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.176 --rc genhtml_branch_coverage=1 00:07:21.176 --rc genhtml_function_coverage=1 00:07:21.176 --rc genhtml_legend=1 00:07:21.176 --rc geninfo_all_blocks=1 00:07:21.176 --rc geninfo_unexecuted_blocks=1 00:07:21.176 00:07:21.176 ' 00:07:21.176 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.176 --rc genhtml_branch_coverage=1 00:07:21.176 --rc genhtml_function_coverage=1 00:07:21.176 --rc genhtml_legend=1 00:07:21.176 --rc geninfo_all_blocks=1 00:07:21.176 --rc geninfo_unexecuted_blocks=1 00:07:21.176 00:07:21.176 ' 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
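The lt/cmp_versions trace above is scripts/common.sh checking whether the installed lcov (1.15) is older than 2.x before picking the legacy coverage flags. A condensed bash sketch of that comparison; the helper below is a simplification of the traced logic, not the verbatim script (the real splitter also treats ':' as a separator):

  # lt A B: succeed (return 0) when dotted version A sorts strictly before B
  lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"             # split version strings on '.' and '-'
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                                  # equal versions are not strictly less-than
  }
  lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_branch_coverage=1 style options'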
00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.177 12:42:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:29.319 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:29.320 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:29.320 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:29.320 12:42:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:29.320 Found net devices under 0000:31:00.0: cvl_0_0 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:29.320 Found net devices under 0000:31:00.1: cvl_0_1 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.320 12:42:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:29.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:07:29.320 00:07:29.320 --- 10.0.0.2 ping statistics --- 00:07:29.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.320 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:07:29.320 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:07:29.580 00:07:29.581 --- 10.0.0.1 ping statistics --- 00:07:29.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.581 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=428542 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 428542 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 428542 ']' 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.581 12:42:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.581 [2024-11-25 12:42:09.339861] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:07:29.581 [2024-11-25 12:42:09.339935] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.581 [2024-11-25 12:42:09.431188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.581 [2024-11-25 12:42:09.473132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.581 [2024-11-25 12:42:09.473167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.581 [2024-11-25 12:42:09.473174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.581 [2024-11-25 12:42:09.473181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.581 [2024-11-25 12:42:09.473191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.581 [2024-11-25 12:42:09.474606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.581 [2024-11-25 12:42:09.474722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.581 [2024-11-25 12:42:09.474725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.524 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.525 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:30.525 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.525 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.525 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.525 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.525 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.525 [2024-11-25 12:42:10.341813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.525 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.785 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:30.785 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.047 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:31.047 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:31.047 12:42:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:31.309 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=186978ff-bca2-45ae-adcc-b421a3506a8d 00:07:31.309 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 186978ff-bca2-45ae-adcc-b421a3506a8d lvol 20 00:07:31.570 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=637e895f-16fb-4d82-b8b1-ddda82951205 00:07:31.570 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.830 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 637e895f-16fb-4d82-b8b1-ddda82951205 00:07:31.830 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:32.092 [2024-11-25 12:42:11.824262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.092 12:42:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.353 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=429194 00:07:32.353 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:32.353 12:42:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:33.296 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 637e895f-16fb-4d82-b8b1-ddda82951205 MY_SNAPSHOT 00:07:33.557 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9fa19278-0b16-4980-9792-8124466c7ba9 00:07:33.557 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 637e895f-16fb-4d82-b8b1-ddda82951205 30 00:07:33.817 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9fa19278-0b16-4980-9792-8124466c7ba9 MY_CLONE 00:07:33.817 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6724e457-968d-449a-85d2-25ab5455f180 00:07:33.817 12:42:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6724e457-968d-449a-85d2-25ab5455f180 00:07:34.389 12:42:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 429194 00:07:44.398 Initializing NVMe Controllers 00:07:44.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:44.398 Controller IO queue size 128, less than required. 00:07:44.398 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
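Condensing the trace above: the lvol test drives everything through rpc.py, layering an lvol store on a RAID-0 of two malloc bdevs, exporting one 20 MiB lvol over NVMe/TCP, then snapshotting, resizing, cloning, and inflating it while spdk_nvme_perf runs. A minimal sketch of that RPC sequence; the $rpc/$lvs/$lvol/$snap/$clone variables are added here for readability (the trace shows the literal UUIDs), and the stdout captures assume each create call prints the new UUID or bdev name, as the captured values in the trace suggest:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport init
  $rpc bdev_malloc_create 64 512                               # Malloc0: 64 MiB, 512 B blocks
  $rpc bdev_malloc_create 64 512                               # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)               # lvstore on top of the RAID-0
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)              # LVOL_BDEV_INIT_SIZE=20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)          # taken while perf I/O is in flight
  $rpc bdev_lvol_resize "$lvol" 30                             # grow to LVOL_BDEV_FINAL_SIZE=30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                              # detach the clone from its snapshot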
00:07:44.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:44.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:44.398 Initialization complete. Launching workers. 00:07:44.398 ======================================================== 00:07:44.398 Latency(us) 00:07:44.398 Device Information : IOPS MiB/s Average min max 00:07:44.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17777.70 69.44 7201.51 1131.60 50975.63 00:07:44.398 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12239.40 47.81 10464.00 3820.82 57818.37 00:07:44.398 ======================================================== 00:07:44.398 Total : 30017.10 117.25 8531.78 1131.60 57818.37 00:07:44.398 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 637e895f-16fb-4d82-b8b1-ddda82951205 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 186978ff-bca2-45ae-adcc-b421a3506a8d 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.398 12:42:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.398 rmmod nvme_tcp 00:07:44.398 rmmod nvme_fabrics 00:07:44.398 rmmod nvme_keyring 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 428542 ']' 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 428542 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 428542 ']' 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 428542 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428542 00:07:44.398 12:42:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428542' 00:07:44.398 killing process with pid 428542 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 428542 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 428542 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.398 12:42:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.788 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.788 00:07:45.789 real 0m24.702s 00:07:45.789 user 1m4.564s 00:07:45.789 sys 0m9.240s 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.789 ************************************ 00:07:45.789 END TEST nvmf_lvol 00:07:45.789 ************************************ 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.789 ************************************ 00:07:45.789 START TEST nvmf_lvs_grow 00:07:45.789 ************************************ 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.789 * Looking for test storage... 
00:07:45.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.789 --rc genhtml_branch_coverage=1 00:07:45.789 --rc genhtml_function_coverage=1 00:07:45.789 --rc genhtml_legend=1 00:07:45.789 --rc geninfo_all_blocks=1 00:07:45.789 --rc geninfo_unexecuted_blocks=1 00:07:45.789 00:07:45.789 ' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.789 --rc genhtml_branch_coverage=1 00:07:45.789 --rc genhtml_function_coverage=1 00:07:45.789 --rc genhtml_legend=1 00:07:45.789 --rc geninfo_all_blocks=1 00:07:45.789 --rc geninfo_unexecuted_blocks=1 00:07:45.789 00:07:45.789 ' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.789 --rc genhtml_branch_coverage=1 00:07:45.789 --rc genhtml_function_coverage=1 00:07:45.789 --rc genhtml_legend=1 00:07:45.789 --rc geninfo_all_blocks=1 00:07:45.789 --rc geninfo_unexecuted_blocks=1 00:07:45.789 00:07:45.789 ' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.789 --rc genhtml_branch_coverage=1 00:07:45.789 --rc genhtml_function_coverage=1 00:07:45.789 --rc genhtml_legend=1 00:07:45.789 --rc geninfo_all_blocks=1 00:07:45.789 --rc geninfo_unexecuted_blocks=1 00:07:45.789 00:07:45.789 ' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:45.789 12:42:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.789 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.790 12:42:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.933 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.933 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.933 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:53.934 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:53.934 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.934 12:42:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:53.934 Found net devices under 0000:31:00.0: cvl_0_0 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:53.934 Found net devices under 0000:31:00.1: cvl_0_1 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.934 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:54.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:07:54.196 00:07:54.196 --- 10.0.0.2 ping statistics --- 00:07:54.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.196 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:54.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:07:54.196 00:07:54.196 --- 10.0.0.1 ping statistics --- 00:07:54.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.196 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:54.196 12:42:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=436201 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 436201 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 436201 ']' 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.196 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.196 [2024-11-25 12:42:34.072770] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
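The namespace plumbing that produced the two ping results above is easy to lose in the trace. Condensed, and assuming the two e810 ports have already been renamed cvl_0_0/cvl_0_1 by the harness, it is roughly:

    ip netns add cvl_0_0_ns_spdk                       # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # host -> target netns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target netns -> host

Both pings answering is what lets nvmfappstart launch nvmf_tgt inside the namespace; the target's DPDK initialization messages continue below.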
00:07:54.196 [2024-11-25 12:42:34.072836] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.457 [2024-11-25 12:42:34.163046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.457 [2024-11-25 12:42:34.203462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.457 [2024-11-25 12:42:34.203499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.457 [2024-11-25 12:42:34.203506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.457 [2024-11-25 12:42:34.203513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.457 [2024-11-25 12:42:34.203519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.457 [2024-11-25 12:42:34.204109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.030 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.030 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:55.030 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.030 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.030 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.030 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.030 12:42:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:55.291 [2024-11-25 12:42:35.061555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.292 ************************************ 00:07:55.292 START TEST lvs_grow_clean 00:07:55.292 ************************************ 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:55.292 12:42:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.292 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.553 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:55.553 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:55.814 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9d073d43-4110-4066-b7be-69bf53ac980c 00:07:55.814 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:07:55.814 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:55.814 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:55.814 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:55.815 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d073d43-4110-4066-b7be-69bf53ac980c lvol 150 00:07:56.076 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c4b3d6de-e532-46f6-a394-aff2cda3ebe1 00:07:56.076 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.076 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:56.337 [2024-11-25 12:42:35.980608] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:56.337 [2024-11-25 12:42:35.980660] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:56.337 true 00:07:56.337 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9d073d43-4110-4066-b7be-69bf53ac980c 00:07:56.337 12:42:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:56.337 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:56.337 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.598 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c4b3d6de-e532-46f6-a394-aff2cda3ebe1 00:07:56.859 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.859 [2024-11-25 12:42:36.654666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.859 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=436887 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 436887 /var/tmp/bdevperf.sock 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 436887 ']' 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.120 12:42:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:57.120 [2024-11-25 12:42:36.882621] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
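Before the bdevperf numbers arrive below, it helps to see the whole lvs_grow_clean setup in one place. A condensed sketch of the RPC sequence traced above (the lvstore and lvol UUIDs are generated fresh each run; $spdk stands for the repository root used throughout this log):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # A 200 MiB file-backed AIO bdev hosts the lvstore.
    truncate -s 200M $spdk/test/nvmf/target/aio_bdev
    $rpc bdev_aio_create $spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)         # 49 data clusters
    lvol=$($rpc bdev_lvol_create -u $lvs lvol 150)               # 150 MiB volume
    # Grow the backing file and rescan; bdev_lvol_grow_lvstore later
    # expands the store from 49 to 99 clusters mid-run.
    truncate -s 400M $spdk/test/nvmf/target/aio_bdev
    $rpc bdev_aio_rescan aio_bdev
    # Export the lvol over NVMe/TCP for bdevperf to attach to.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf then connects from its own RPC socket (bdev_nvme_attach_controller against /var/tmp/bdevperf.sock, as traced below) and runs 10 seconds of 4 KiB random writes at queue depth 128.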
00:07:57.120 [2024-11-25 12:42:36.882672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436887 ] 00:07:57.120 [2024-11-25 12:42:36.976152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.120 [2024-11-25 12:42:37.012128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.060 12:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.060 12:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:58.060 12:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:58.060 Nvme0n1 00:07:58.060 12:42:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:58.321 [ 00:07:58.321 { 00:07:58.321 "name": "Nvme0n1", 00:07:58.321 "aliases": [ 00:07:58.321 "c4b3d6de-e532-46f6-a394-aff2cda3ebe1" 00:07:58.321 ], 00:07:58.321 "product_name": "NVMe disk", 00:07:58.321 "block_size": 4096, 00:07:58.321 "num_blocks": 38912, 00:07:58.321 "uuid": "c4b3d6de-e532-46f6-a394-aff2cda3ebe1", 00:07:58.321 "numa_id": 0, 00:07:58.321 "assigned_rate_limits": { 00:07:58.321 "rw_ios_per_sec": 0, 00:07:58.321 "rw_mbytes_per_sec": 0, 00:07:58.321 "r_mbytes_per_sec": 0, 00:07:58.321 "w_mbytes_per_sec": 0 00:07:58.321 }, 00:07:58.321 "claimed": false, 00:07:58.321 "zoned": false, 00:07:58.321 "supported_io_types": { 00:07:58.321 "read": true, 00:07:58.321 "write": true, 00:07:58.321 "unmap": true, 00:07:58.321 "flush": true, 00:07:58.321 "reset": true, 00:07:58.321 "nvme_admin": true, 00:07:58.321 "nvme_io": true, 00:07:58.321 "nvme_io_md": false, 00:07:58.321 "write_zeroes": true, 00:07:58.321 "zcopy": false, 00:07:58.321 "get_zone_info": false, 00:07:58.321 "zone_management": false, 00:07:58.321 "zone_append": false, 00:07:58.321 "compare": true, 00:07:58.321 "compare_and_write": true, 00:07:58.321 "abort": true, 00:07:58.321 "seek_hole": false, 00:07:58.321 "seek_data": false, 00:07:58.321 "copy": true, 00:07:58.321 "nvme_iov_md": false 00:07:58.321 }, 00:07:58.321 "memory_domains": [ 00:07:58.321 { 00:07:58.321 "dma_device_id": "system", 00:07:58.321 "dma_device_type": 1 00:07:58.321 } 00:07:58.321 ], 00:07:58.321 "driver_specific": { 00:07:58.321 "nvme": [ 00:07:58.321 { 00:07:58.321 "trid": { 00:07:58.321 "trtype": "TCP", 00:07:58.321 "adrfam": "IPv4", 00:07:58.321 "traddr": "10.0.0.2", 00:07:58.321 "trsvcid": "4420", 00:07:58.321 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:58.321 }, 00:07:58.321 "ctrlr_data": { 00:07:58.321 "cntlid": 1, 00:07:58.321 "vendor_id": "0x8086", 00:07:58.321 "model_number": "SPDK bdev Controller", 00:07:58.321 "serial_number": "SPDK0", 00:07:58.321 "firmware_revision": "25.01", 00:07:58.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.321 "oacs": { 00:07:58.321 "security": 0, 00:07:58.321 "format": 0, 00:07:58.321 "firmware": 0, 00:07:58.321 "ns_manage": 0 00:07:58.321 }, 00:07:58.321 "multi_ctrlr": true, 00:07:58.321 
"ana_reporting": false 00:07:58.321 }, 00:07:58.321 "vs": { 00:07:58.322 "nvme_version": "1.3" 00:07:58.322 }, 00:07:58.322 "ns_data": { 00:07:58.322 "id": 1, 00:07:58.322 "can_share": true 00:07:58.322 } 00:07:58.322 } 00:07:58.322 ], 00:07:58.322 "mp_policy": "active_passive" 00:07:58.322 } 00:07:58.322 } 00:07:58.322 ] 00:07:58.322 12:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.322 12:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=437022 00:07:58.322 12:42:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:58.322 Running I/O for 10 seconds... 00:07:59.708 Latency(us) 00:07:59.708 [2024-11-25T11:42:39.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.708 Nvme0n1 : 1.00 17853.00 69.74 0.00 0.00 0.00 0.00 0.00 00:07:59.708 [2024-11-25T11:42:39.611Z] =================================================================================================================== 00:07:59.708 [2024-11-25T11:42:39.611Z] Total : 17853.00 69.74 0.00 0.00 0.00 0.00 0.00 00:07:59.708 00:08:00.278 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:00.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.538 Nvme0n1 : 2.00 17942.50 70.09 0.00 0.00 0.00 0.00 0.00 00:08:00.538 [2024-11-25T11:42:40.441Z] =================================================================================================================== 00:08:00.538 [2024-11-25T11:42:40.441Z] Total : 17942.50 70.09 0.00 0.00 0.00 0.00 0.00 00:08:00.538 00:08:00.538 true 00:08:00.538 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:00.538 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:00.798 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:00.798 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:00.798 12:42:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 437022 00:08:01.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.367 Nvme0n1 : 3.00 17993.67 70.29 0.00 0.00 0.00 0.00 0.00 00:08:01.367 [2024-11-25T11:42:41.270Z] =================================================================================================================== 00:08:01.367 [2024-11-25T11:42:41.270Z] Total : 17993.67 70.29 0.00 0.00 0.00 0.00 0.00 00:08:01.367 00:08:02.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.340 Nvme0n1 : 4.00 18012.50 70.36 0.00 0.00 0.00 0.00 0.00 00:08:02.340 [2024-11-25T11:42:42.243Z] 
=================================================================================================================== 00:08:02.340 [2024-11-25T11:42:42.243Z] Total : 18012.50 70.36 0.00 0.00 0.00 0.00 0.00 00:08:02.340 00:08:03.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.420 Nvme0n1 : 5.00 18043.80 70.48 0.00 0.00 0.00 0.00 0.00 00:08:03.420 [2024-11-25T11:42:43.323Z] =================================================================================================================== 00:08:03.420 [2024-11-25T11:42:43.323Z] Total : 18043.80 70.48 0.00 0.00 0.00 0.00 0.00 00:08:03.420 00:08:04.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.364 Nvme0n1 : 6.00 18075.83 70.61 0.00 0.00 0.00 0.00 0.00 00:08:04.364 [2024-11-25T11:42:44.267Z] =================================================================================================================== 00:08:04.364 [2024-11-25T11:42:44.267Z] Total : 18075.83 70.61 0.00 0.00 0.00 0.00 0.00 00:08:04.364 00:08:05.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.751 Nvme0n1 : 7.00 18090.00 70.66 0.00 0.00 0.00 0.00 0.00 00:08:05.751 [2024-11-25T11:42:45.654Z] =================================================================================================================== 00:08:05.751 [2024-11-25T11:42:45.654Z] Total : 18090.00 70.66 0.00 0.00 0.00 0.00 0.00 00:08:05.751 00:08:06.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.322 Nvme0n1 : 8.00 18101.50 70.71 0.00 0.00 0.00 0.00 0.00 00:08:06.322 [2024-11-25T11:42:46.225Z] =================================================================================================================== 00:08:06.322 [2024-11-25T11:42:46.226Z] Total : 18101.50 70.71 0.00 0.00 0.00 0.00 0.00 00:08:06.323 00:08:07.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.707 Nvme0n1 : 9.00 18117.11 70.77 0.00 0.00 0.00 0.00 0.00 00:08:07.707 [2024-11-25T11:42:47.610Z] =================================================================================================================== 00:08:07.707 [2024-11-25T11:42:47.610Z] Total : 18117.11 70.77 0.00 0.00 0.00 0.00 0.00 00:08:07.707 00:08:08.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.649 Nvme0n1 : 10.00 18126.20 70.81 0.00 0.00 0.00 0.00 0.00 00:08:08.649 [2024-11-25T11:42:48.552Z] =================================================================================================================== 00:08:08.649 [2024-11-25T11:42:48.552Z] Total : 18126.20 70.81 0.00 0.00 0.00 0.00 0.00 00:08:08.649 00:08:08.649 00:08:08.649 Latency(us) 00:08:08.649 [2024-11-25T11:42:48.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.649 Nvme0n1 : 10.00 18124.36 70.80 0.00 0.00 7058.79 2075.31 12724.91 00:08:08.649 [2024-11-25T11:42:48.552Z] =================================================================================================================== 00:08:08.649 [2024-11-25T11:42:48.552Z] Total : 18124.36 70.80 0.00 0.00 7058.79 2075.31 12724.91 00:08:08.649 { 00:08:08.649 "results": [ 00:08:08.649 { 00:08:08.649 "job": "Nvme0n1", 00:08:08.649 "core_mask": "0x2", 00:08:08.649 "workload": "randwrite", 00:08:08.649 "status": "finished", 00:08:08.649 "queue_depth": 128, 00:08:08.649 "io_size": 4096, 00:08:08.649 
"runtime": 10.004491, 00:08:08.649 "iops": 18124.36034976692, 00:08:08.649 "mibps": 70.79828261627704, 00:08:08.649 "io_failed": 0, 00:08:08.649 "io_timeout": 0, 00:08:08.649 "avg_latency_us": 7058.792458918148, 00:08:08.649 "min_latency_us": 2075.306666666667, 00:08:08.649 "max_latency_us": 12724.906666666666 00:08:08.649 } 00:08:08.649 ], 00:08:08.649 "core_count": 1 00:08:08.649 } 00:08:08.649 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 436887 00:08:08.649 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 436887 ']' 00:08:08.649 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 436887 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 436887 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 436887' 00:08:08.650 killing process with pid 436887 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 436887 00:08:08.650 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.650 00:08:08.650 Latency(us) 00:08:08.650 [2024-11-25T11:42:48.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.650 [2024-11-25T11:42:48.553Z] =================================================================================================================== 00:08:08.650 [2024-11-25T11:42:48.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 436887 00:08:08.650 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.911 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.911 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:08.911 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:09.171 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:09.171 12:42:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:09.171 12:42:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.433 [2024-11-25 12:42:49.110429] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:09.433 request: 00:08:09.433 { 00:08:09.433 "uuid": "9d073d43-4110-4066-b7be-69bf53ac980c", 00:08:09.433 "method": "bdev_lvol_get_lvstores", 00:08:09.433 "req_id": 1 00:08:09.433 } 00:08:09.433 Got JSON-RPC error response 00:08:09.433 response: 00:08:09.433 { 00:08:09.433 "code": -19, 00:08:09.433 "message": "No such device" 00:08:09.433 } 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.433 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.694 aio_bdev 00:08:09.694 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c4b3d6de-e532-46f6-a394-aff2cda3ebe1 00:08:09.694 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c4b3d6de-e532-46f6-a394-aff2cda3ebe1 00:08:09.694 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.694 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:09.694 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.694 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.694 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:09.955 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c4b3d6de-e532-46f6-a394-aff2cda3ebe1 -t 2000 00:08:09.955 [ 00:08:09.955 { 00:08:09.955 "name": "c4b3d6de-e532-46f6-a394-aff2cda3ebe1", 00:08:09.955 "aliases": [ 00:08:09.955 "lvs/lvol" 00:08:09.955 ], 00:08:09.955 "product_name": "Logical Volume", 00:08:09.955 "block_size": 4096, 00:08:09.955 "num_blocks": 38912, 00:08:09.955 "uuid": "c4b3d6de-e532-46f6-a394-aff2cda3ebe1", 00:08:09.955 "assigned_rate_limits": { 00:08:09.955 "rw_ios_per_sec": 0, 00:08:09.955 "rw_mbytes_per_sec": 0, 00:08:09.955 "r_mbytes_per_sec": 0, 00:08:09.955 "w_mbytes_per_sec": 0 00:08:09.955 }, 00:08:09.955 "claimed": false, 00:08:09.955 "zoned": false, 00:08:09.955 "supported_io_types": { 00:08:09.955 "read": true, 00:08:09.955 "write": true, 00:08:09.955 "unmap": true, 00:08:09.955 "flush": false, 00:08:09.955 "reset": true, 00:08:09.955 "nvme_admin": false, 00:08:09.955 "nvme_io": false, 00:08:09.955 "nvme_io_md": false, 00:08:09.955 "write_zeroes": true, 00:08:09.955 "zcopy": false, 00:08:09.955 "get_zone_info": false, 00:08:09.955 "zone_management": false, 00:08:09.955 "zone_append": false, 00:08:09.955 "compare": false, 00:08:09.955 "compare_and_write": false, 00:08:09.955 "abort": false, 00:08:09.955 "seek_hole": true, 00:08:09.955 "seek_data": true, 00:08:09.955 "copy": false, 00:08:09.955 "nvme_iov_md": false 00:08:09.955 }, 00:08:09.955 "driver_specific": { 00:08:09.955 "lvol": { 00:08:09.955 "lvol_store_uuid": "9d073d43-4110-4066-b7be-69bf53ac980c", 00:08:09.955 "base_bdev": "aio_bdev", 00:08:09.955 "thin_provision": false, 00:08:09.955 "num_allocated_clusters": 38, 00:08:09.955 "snapshot": false, 00:08:09.955 "clone": false, 00:08:09.955 "esnap_clone": false 00:08:09.955 } 00:08:09.955 } 00:08:09.955 } 00:08:09.955 ] 00:08:09.955 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:09.955 12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:09.955 
12:42:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:10.217 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:10.217 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:10.217 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:10.478 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:10.478 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c4b3d6de-e532-46f6-a394-aff2cda3ebe1 00:08:10.478 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d073d43-4110-4066-b7be-69bf53ac980c 00:08:10.739 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.000 00:08:11.000 real 0m15.643s 00:08:11.000 user 0m15.361s 00:08:11.000 sys 0m1.341s 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:11.000 ************************************ 00:08:11.000 END TEST lvs_grow_clean 00:08:11.000 ************************************ 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.000 ************************************ 00:08:11.000 START TEST lvs_grow_dirty 00:08:11.000 ************************************ 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.000 12:42:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.261 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:11.261 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:11.522 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:11.522 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:11.522 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:11.522 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:11.522 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:11.522 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a698c791-afd5-4437-83b9-2a3a1d9df337 lvol 150 00:08:11.783 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 00:08:11.783 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.783 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:12.044 [2024-11-25 12:42:51.707553] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:12.044 [2024-11-25 12:42:51.707604] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:12.044 true 00:08:12.044 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:12.044 12:42:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:12.044 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:12.044 12:42:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:12.304 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 00:08:12.564 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.564 [2024-11-25 12:42:52.381654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.564 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.824 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=440014 00:08:12.824 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.825 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:12.825 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 440014 /var/tmp/bdevperf.sock 00:08:12.825 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 440014 ']' 00:08:12.825 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.825 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.825 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.825 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.825 12:42:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:12.825 [2024-11-25 12:42:52.596939] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
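[editor note — reproduction sketch] The bdevperf instance launched above is started with -z, so it comes up idle and waits on its own RPC socket; the trace that follows attaches the NVMe-oF namespace as a local bdev and then kicks off the configured 10-second randwrite workload. A minimal sketch of that sequence, using only the commands visible in this run (socket path, address, and subsystem NQN are the values from this log and would differ elsewhere; paths are abbreviated relative to the SPDK checkout):

    # Start bdevperf idle (-z) so a bdev can be attached over its RPC socket first
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the namespace exported by the NVMe-oF target; it appears as Nvme0n1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Trigger the pre-configured workload against the attached bdev
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The same pattern (idle bdevperf, attach over RPC, perform_tests) was used by the lvs_grow_clean pass earlier in this log; only the pid and lvstore UUID differ.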
00:08:12.825 [2024-11-25 12:42:52.596991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440014 ] 00:08:12.825 [2024-11-25 12:42:52.689839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.825 [2024-11-25 12:42:52.726764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.766 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.766 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:13.766 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:14.026 Nvme0n1 00:08:14.026 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:14.286 [ 00:08:14.286 { 00:08:14.286 "name": "Nvme0n1", 00:08:14.286 "aliases": [ 00:08:14.286 "f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0" 00:08:14.286 ], 00:08:14.286 "product_name": "NVMe disk", 00:08:14.286 "block_size": 4096, 00:08:14.286 "num_blocks": 38912, 00:08:14.286 "uuid": "f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0", 00:08:14.286 "numa_id": 0, 00:08:14.286 "assigned_rate_limits": { 00:08:14.286 "rw_ios_per_sec": 0, 00:08:14.286 "rw_mbytes_per_sec": 0, 00:08:14.286 "r_mbytes_per_sec": 0, 00:08:14.287 "w_mbytes_per_sec": 0 00:08:14.287 }, 00:08:14.287 "claimed": false, 00:08:14.287 "zoned": false, 00:08:14.287 "supported_io_types": { 00:08:14.287 "read": true, 00:08:14.287 "write": true, 00:08:14.287 "unmap": true, 00:08:14.287 "flush": true, 00:08:14.287 "reset": true, 00:08:14.287 "nvme_admin": true, 00:08:14.287 "nvme_io": true, 00:08:14.287 "nvme_io_md": false, 00:08:14.287 "write_zeroes": true, 00:08:14.287 "zcopy": false, 00:08:14.287 "get_zone_info": false, 00:08:14.287 "zone_management": false, 00:08:14.287 "zone_append": false, 00:08:14.287 "compare": true, 00:08:14.287 "compare_and_write": true, 00:08:14.287 "abort": true, 00:08:14.287 "seek_hole": false, 00:08:14.287 "seek_data": false, 00:08:14.287 "copy": true, 00:08:14.287 "nvme_iov_md": false 00:08:14.287 }, 00:08:14.287 "memory_domains": [ 00:08:14.287 { 00:08:14.287 "dma_device_id": "system", 00:08:14.287 "dma_device_type": 1 00:08:14.287 } 00:08:14.287 ], 00:08:14.287 "driver_specific": { 00:08:14.287 "nvme": [ 00:08:14.287 { 00:08:14.287 "trid": { 00:08:14.287 "trtype": "TCP", 00:08:14.287 "adrfam": "IPv4", 00:08:14.287 "traddr": "10.0.0.2", 00:08:14.287 "trsvcid": "4420", 00:08:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:14.287 }, 00:08:14.287 "ctrlr_data": { 00:08:14.287 "cntlid": 1, 00:08:14.287 "vendor_id": "0x8086", 00:08:14.287 "model_number": "SPDK bdev Controller", 00:08:14.287 "serial_number": "SPDK0", 00:08:14.287 "firmware_revision": "25.01", 00:08:14.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.287 "oacs": { 00:08:14.287 "security": 0, 00:08:14.287 "format": 0, 00:08:14.287 "firmware": 0, 00:08:14.287 "ns_manage": 0 00:08:14.287 }, 00:08:14.287 "multi_ctrlr": true, 00:08:14.287 
"ana_reporting": false 00:08:14.287 }, 00:08:14.287 "vs": { 00:08:14.287 "nvme_version": "1.3" 00:08:14.287 }, 00:08:14.287 "ns_data": { 00:08:14.287 "id": 1, 00:08:14.287 "can_share": true 00:08:14.287 } 00:08:14.287 } 00:08:14.287 ], 00:08:14.287 "mp_policy": "active_passive" 00:08:14.287 } 00:08:14.287 } 00:08:14.287 ] 00:08:14.287 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:14.287 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=440340 00:08:14.287 12:42:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:14.287 Running I/O for 10 seconds... 00:08:15.228 Latency(us) 00:08:15.228 [2024-11-25T11:42:55.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.228 Nvme0n1 : 1.00 17847.00 69.71 0.00 0.00 0.00 0.00 0.00 00:08:15.228 [2024-11-25T11:42:55.131Z] =================================================================================================================== 00:08:15.228 [2024-11-25T11:42:55.131Z] Total : 17847.00 69.71 0.00 0.00 0.00 0.00 0.00 00:08:15.228 00:08:16.188 12:42:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:16.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.188 Nvme0n1 : 2.00 17966.00 70.18 0.00 0.00 0.00 0.00 0.00 00:08:16.188 [2024-11-25T11:42:56.091Z] =================================================================================================================== 00:08:16.188 [2024-11-25T11:42:56.092Z] Total : 17966.00 70.18 0.00 0.00 0.00 0.00 0.00 00:08:16.189 00:08:16.449 true 00:08:16.449 12:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:16.449 12:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:16.709 12:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:16.709 12:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:16.709 12:42:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 440340 00:08:17.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.281 Nvme0n1 : 3.00 18023.67 70.40 0.00 0.00 0.00 0.00 0.00 00:08:17.281 [2024-11-25T11:42:57.184Z] =================================================================================================================== 00:08:17.281 [2024-11-25T11:42:57.184Z] Total : 18023.67 70.40 0.00 0.00 0.00 0.00 0.00 00:08:17.281 00:08:18.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.224 Nvme0n1 : 4.00 18055.25 70.53 0.00 0.00 0.00 0.00 0.00 00:08:18.224 [2024-11-25T11:42:58.127Z] 
=================================================================================================================== 00:08:18.224 [2024-11-25T11:42:58.127Z] Total : 18055.25 70.53 0.00 0.00 0.00 0.00 0.00 00:08:18.224 00:08:19.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.610 Nvme0n1 : 5.00 18080.80 70.63 0.00 0.00 0.00 0.00 0.00 00:08:19.610 [2024-11-25T11:42:59.513Z] =================================================================================================================== 00:08:19.610 [2024-11-25T11:42:59.513Z] Total : 18080.80 70.63 0.00 0.00 0.00 0.00 0.00 00:08:19.610 00:08:20.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.181 Nvme0n1 : 6.00 18103.17 70.72 0.00 0.00 0.00 0.00 0.00 00:08:20.181 [2024-11-25T11:43:00.084Z] =================================================================================================================== 00:08:20.181 [2024-11-25T11:43:00.084Z] Total : 18103.17 70.72 0.00 0.00 0.00 0.00 0.00 00:08:20.181 00:08:21.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.567 Nvme0n1 : 7.00 18110.14 70.74 0.00 0.00 0.00 0.00 0.00 00:08:21.567 [2024-11-25T11:43:01.470Z] =================================================================================================================== 00:08:21.567 [2024-11-25T11:43:01.470Z] Total : 18110.14 70.74 0.00 0.00 0.00 0.00 0.00 00:08:21.567 00:08:22.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.508 Nvme0n1 : 8.00 18127.38 70.81 0.00 0.00 0.00 0.00 0.00 00:08:22.508 [2024-11-25T11:43:02.411Z] =================================================================================================================== 00:08:22.508 [2024-11-25T11:43:02.411Z] Total : 18127.38 70.81 0.00 0.00 0.00 0.00 0.00 00:08:22.508 00:08:23.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.451 Nvme0n1 : 9.00 18136.00 70.84 0.00 0.00 0.00 0.00 0.00 00:08:23.451 [2024-11-25T11:43:03.354Z] =================================================================================================================== 00:08:23.451 [2024-11-25T11:43:03.354Z] Total : 18136.00 70.84 0.00 0.00 0.00 0.00 0.00 00:08:23.451 00:08:24.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.393 Nvme0n1 : 10.00 18150.30 70.90 0.00 0.00 0.00 0.00 0.00 00:08:24.393 [2024-11-25T11:43:04.296Z] =================================================================================================================== 00:08:24.393 [2024-11-25T11:43:04.296Z] Total : 18150.30 70.90 0.00 0.00 0.00 0.00 0.00 00:08:24.393 00:08:24.393 00:08:24.393 Latency(us) 00:08:24.393 [2024-11-25T11:43:04.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.393 Nvme0n1 : 10.01 18151.94 70.91 0.00 0.00 7048.67 4232.53 14745.60 00:08:24.393 [2024-11-25T11:43:04.296Z] =================================================================================================================== 00:08:24.393 [2024-11-25T11:43:04.296Z] Total : 18151.94 70.91 0.00 0.00 7048.67 4232.53 14745.60 00:08:24.393 { 00:08:24.393 "results": [ 00:08:24.393 { 00:08:24.393 "job": "Nvme0n1", 00:08:24.393 "core_mask": "0x2", 00:08:24.393 "workload": "randwrite", 00:08:24.393 "status": "finished", 00:08:24.393 "queue_depth": 128, 00:08:24.393 "io_size": 4096, 00:08:24.393 
"runtime": 10.006149, 00:08:24.393 "iops": 18151.938373094385, 00:08:24.393 "mibps": 70.90600926989994, 00:08:24.393 "io_failed": 0, 00:08:24.393 "io_timeout": 0, 00:08:24.393 "avg_latency_us": 7048.668509083435, 00:08:24.393 "min_latency_us": 4232.533333333334, 00:08:24.393 "max_latency_us": 14745.6 00:08:24.393 } 00:08:24.393 ], 00:08:24.393 "core_count": 1 00:08:24.393 } 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 440014 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 440014 ']' 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 440014 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440014 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440014' 00:08:24.393 killing process with pid 440014 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 440014 00:08:24.393 Received shutdown signal, test time was about 10.000000 seconds 00:08:24.393 00:08:24.393 Latency(us) 00:08:24.393 [2024-11-25T11:43:04.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.393 [2024-11-25T11:43:04.296Z] =================================================================================================================== 00:08:24.393 [2024-11-25T11:43:04.296Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 440014 00:08:24.393 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.654 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:24.915 12:43:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 436201 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 436201 00:08:24.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 436201 Killed "${NVMF_APP[@]}" "$@" 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.915 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=442387 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 442387 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 442387 ']' 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.177 12:43:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.177 [2024-11-25 12:43:04.881019] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:08:25.177 [2024-11-25 12:43:04.881098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.177 [2024-11-25 12:43:04.966860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.177 [2024-11-25 12:43:05.002620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.177 [2024-11-25 12:43:05.002650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.177 [2024-11-25 12:43:05.002658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.177 [2024-11-25 12:43:05.002665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
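[editor note — what the dirty path checks] The previous nvmf_tgt was killed with SIGKILL above, so the lvolstore was never closed cleanly. The restarted target below re-creates the AIO bdev, which forces the blobstore to replay its on-disk metadata — that is what the "Performing recovery on blobstore" / "Recover: blob 0x0 / 0x1" notices that follow are. A rough sketch of the verification steps, with the UUID, sizes, and cluster counts taken from this run (they are specific to this log):

    # Re-create the AIO bdev backing the uncleanly-closed lvolstore; loading it
    # triggers blobstore recovery and the lvol bdev reappears on its own
    ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    ./scripts/rpc.py bdev_wait_for_examine

    # The grown geometry must survive recovery: 99 data clusters total after the
    # grow, 61 still free once the 150M lvol (38 allocated clusters) is counted
    ./scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 \
        | jq -r '.[0].total_data_clusters, .[0].free_clusters'   # expect 99 and 61

The trace then repeats the negative test from the clean pass: deleting aio_bdev and calling bdev_lvol_get_lvstores must fail with -19 "No such device" before the bdev is created one last time for teardown.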
00:08:25.177 [2024-11-25 12:43:05.002671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.177 [2024-11-25 12:43:05.003255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.121 [2024-11-25 12:43:05.853756] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:26.121 [2024-11-25 12:43:05.853848] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:26.121 [2024-11-25 12:43:05.853887] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.121 12:43:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:26.382 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 -t 2000 00:08:26.382 [ 00:08:26.382 { 00:08:26.382 "name": "f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0", 00:08:26.382 "aliases": [ 00:08:26.382 "lvs/lvol" 00:08:26.382 ], 00:08:26.382 "product_name": "Logical Volume", 00:08:26.382 "block_size": 4096, 00:08:26.382 "num_blocks": 38912, 00:08:26.382 "uuid": "f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0", 00:08:26.382 "assigned_rate_limits": { 00:08:26.382 "rw_ios_per_sec": 0, 00:08:26.382 "rw_mbytes_per_sec": 0, 
00:08:26.382 "r_mbytes_per_sec": 0, 00:08:26.382 "w_mbytes_per_sec": 0 00:08:26.382 }, 00:08:26.382 "claimed": false, 00:08:26.382 "zoned": false, 00:08:26.382 "supported_io_types": { 00:08:26.382 "read": true, 00:08:26.382 "write": true, 00:08:26.382 "unmap": true, 00:08:26.382 "flush": false, 00:08:26.382 "reset": true, 00:08:26.382 "nvme_admin": false, 00:08:26.382 "nvme_io": false, 00:08:26.382 "nvme_io_md": false, 00:08:26.382 "write_zeroes": true, 00:08:26.382 "zcopy": false, 00:08:26.382 "get_zone_info": false, 00:08:26.382 "zone_management": false, 00:08:26.382 "zone_append": false, 00:08:26.382 "compare": false, 00:08:26.382 "compare_and_write": false, 00:08:26.382 "abort": false, 00:08:26.382 "seek_hole": true, 00:08:26.382 "seek_data": true, 00:08:26.382 "copy": false, 00:08:26.382 "nvme_iov_md": false 00:08:26.382 }, 00:08:26.382 "driver_specific": { 00:08:26.382 "lvol": { 00:08:26.382 "lvol_store_uuid": "a698c791-afd5-4437-83b9-2a3a1d9df337", 00:08:26.382 "base_bdev": "aio_bdev", 00:08:26.382 "thin_provision": false, 00:08:26.382 "num_allocated_clusters": 38, 00:08:26.382 "snapshot": false, 00:08:26.382 "clone": false, 00:08:26.382 "esnap_clone": false 00:08:26.382 } 00:08:26.382 } 00:08:26.382 } 00:08:26.382 ] 00:08:26.382 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:26.382 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:26.382 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:26.643 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:26.643 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:26.643 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:26.643 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:26.643 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.903 [2024-11-25 12:43:06.685880] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:26.903 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:26.903 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:26.903 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:26.903 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.903 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.904 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.904 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.904 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.904 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:26.904 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.904 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:26.904 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:27.165 request: 00:08:27.165 { 00:08:27.165 "uuid": "a698c791-afd5-4437-83b9-2a3a1d9df337", 00:08:27.165 "method": "bdev_lvol_get_lvstores", 00:08:27.165 "req_id": 1 00:08:27.165 } 00:08:27.165 Got JSON-RPC error response 00:08:27.165 response: 00:08:27.165 { 00:08:27.165 "code": -19, 00:08:27.165 "message": "No such device" 00:08:27.165 } 00:08:27.165 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:27.165 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.165 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.165 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.165 12:43:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.165 aio_bdev 00:08:27.165 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 00:08:27.165 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 00:08:27.165 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.165 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:27.165 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.165 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.165 12:43:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.426 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 -t 2000 00:08:27.688 [ 00:08:27.688 { 00:08:27.688 "name": "f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0", 00:08:27.688 "aliases": [ 00:08:27.688 "lvs/lvol" 00:08:27.688 ], 00:08:27.688 "product_name": "Logical Volume", 00:08:27.688 "block_size": 4096, 00:08:27.688 "num_blocks": 38912, 00:08:27.688 "uuid": "f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0", 00:08:27.688 "assigned_rate_limits": { 00:08:27.688 "rw_ios_per_sec": 0, 00:08:27.688 "rw_mbytes_per_sec": 0, 00:08:27.688 "r_mbytes_per_sec": 0, 00:08:27.688 "w_mbytes_per_sec": 0 00:08:27.688 }, 00:08:27.688 "claimed": false, 00:08:27.688 "zoned": false, 00:08:27.688 "supported_io_types": { 00:08:27.688 "read": true, 00:08:27.688 "write": true, 00:08:27.688 "unmap": true, 00:08:27.688 "flush": false, 00:08:27.688 "reset": true, 00:08:27.688 "nvme_admin": false, 00:08:27.688 "nvme_io": false, 00:08:27.688 "nvme_io_md": false, 00:08:27.688 "write_zeroes": true, 00:08:27.688 "zcopy": false, 00:08:27.688 "get_zone_info": false, 00:08:27.688 "zone_management": false, 00:08:27.688 "zone_append": false, 00:08:27.688 "compare": false, 00:08:27.688 "compare_and_write": false, 00:08:27.688 "abort": false, 00:08:27.688 "seek_hole": true, 00:08:27.688 "seek_data": true, 00:08:27.688 "copy": false, 00:08:27.688 "nvme_iov_md": false 00:08:27.688 }, 00:08:27.688 "driver_specific": { 00:08:27.688 "lvol": { 00:08:27.688 "lvol_store_uuid": "a698c791-afd5-4437-83b9-2a3a1d9df337", 00:08:27.688 "base_bdev": "aio_bdev", 00:08:27.688 "thin_provision": false, 00:08:27.688 "num_allocated_clusters": 38, 00:08:27.688 "snapshot": false, 00:08:27.688 "clone": false, 00:08:27.688 "esnap_clone": false 00:08:27.688 } 00:08:27.688 } 00:08:27.688 } 00:08:27.688 ] 00:08:27.688 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:27.688 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:27.688 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:27.688 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:27.688 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:27.688 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.948 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.949 12:43:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f2034c1b-e58e-4dba-a06d-7a79e6c3f7f0 00:08:28.209 12:43:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a698c791-afd5-4437-83b9-2a3a1d9df337 00:08:28.209 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.470 00:08:28.470 real 0m17.446s 00:08:28.470 user 0m45.778s 00:08:28.470 sys 0m2.822s 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.470 ************************************ 00:08:28.470 END TEST lvs_grow_dirty 00:08:28.470 ************************************ 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:28.470 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:28.470 nvmf_trace.0 00:08:28.731 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.732 rmmod nvme_tcp 00:08:28.732 rmmod nvme_fabrics 00:08:28.732 rmmod nvme_keyring 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:28.732 
12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 442387 ']' 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 442387 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 442387 ']' 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 442387 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442387 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442387' 00:08:28.732 killing process with pid 442387 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 442387 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 442387 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.732 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.993 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.993 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:28.993 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:28.993 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.993 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.994 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.994 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.994 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.994 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.994 12:43:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.908 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.908 00:08:30.908 real 0m45.306s 00:08:30.908 user 1m7.533s 00:08:30.908 sys 0m11.079s 00:08:30.908 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.908 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.908 ************************************ 00:08:30.908 END TEST nvmf_lvs_grow 00:08:30.908 ************************************ 00:08:30.908 12:43:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:30.908 12:43:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.908 12:43:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.908 12:43:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.908 ************************************ 00:08:30.908 START TEST nvmf_bdev_io_wait 00:08:30.908 ************************************ 00:08:30.908 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:31.170 * Looking for test storage... 00:08:31.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.170 --rc genhtml_branch_coverage=1 00:08:31.170 --rc genhtml_function_coverage=1 00:08:31.170 --rc genhtml_legend=1 00:08:31.170 --rc geninfo_all_blocks=1 00:08:31.170 --rc geninfo_unexecuted_blocks=1 00:08:31.170 00:08:31.170 ' 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.170 --rc genhtml_branch_coverage=1 00:08:31.170 --rc genhtml_function_coverage=1 00:08:31.170 --rc genhtml_legend=1 00:08:31.170 --rc geninfo_all_blocks=1 00:08:31.170 --rc geninfo_unexecuted_blocks=1 00:08:31.170 00:08:31.170 ' 00:08:31.170 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.170 --rc genhtml_branch_coverage=1 00:08:31.170 --rc genhtml_function_coverage=1 00:08:31.170 --rc genhtml_legend=1 00:08:31.171 --rc geninfo_all_blocks=1 00:08:31.171 --rc geninfo_unexecuted_blocks=1 00:08:31.171 00:08:31.171 ' 00:08:31.171 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.171 --rc genhtml_branch_coverage=1 00:08:31.171 --rc genhtml_function_coverage=1 00:08:31.171 --rc genhtml_legend=1 00:08:31.171 --rc geninfo_all_blocks=1 00:08:31.171 --rc geninfo_unexecuted_blocks=1 00:08:31.171 00:08:31.171 ' 00:08:31.171 12:43:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.171 12:43:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:31.171 12:43:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.316 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:39.317 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:39.317 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.317 12:43:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:39.317 Found net devices under 0000:31:00.0: cvl_0_0 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:39.317 Found net devices under 0000:31:00.1: cvl_0_1 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.317 12:43:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.317 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.317 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.317 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.317 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.317 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.579 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.579 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:08:39.580 00:08:39.580 --- 10.0.0.2 ping statistics --- 00:08:39.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.580 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:08:39.580 00:08:39.580 --- 10.0.0.1 ping statistics --- 00:08:39.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.580 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=447986 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 447986 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 447986 ']' 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.580 12:43:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.580 [2024-11-25 12:43:19.433277] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:08:39.580 [2024-11-25 12:43:19.433339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.841 [2024-11-25 12:43:19.528278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.841 [2024-11-25 12:43:19.571622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.841 [2024-11-25 12:43:19.571663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.841 [2024-11-25 12:43:19.571671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.841 [2024-11-25 12:43:19.571678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.841 [2024-11-25 12:43:19.571684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.841 [2024-11-25 12:43:19.573345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.841 [2024-11-25 12:43:19.573460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.841 [2024-11-25 12:43:19.573619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.841 [2024-11-25 12:43:19.573619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.413 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:40.675 [2024-11-25 12:43:20.349902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.675 Malloc0 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.675 [2024-11-25 12:43:20.409112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=448187 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=448189 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.675 { 00:08:40.675 "params": { 
00:08:40.675 "name": "Nvme$subsystem", 00:08:40.675 "trtype": "$TEST_TRANSPORT", 00:08:40.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.675 "adrfam": "ipv4", 00:08:40.675 "trsvcid": "$NVMF_PORT", 00:08:40.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.675 "hdgst": ${hdgst:-false}, 00:08:40.675 "ddgst": ${ddgst:-false} 00:08:40.675 }, 00:08:40.675 "method": "bdev_nvme_attach_controller" 00:08:40.675 } 00:08:40.675 EOF 00:08:40.675 )") 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=448191 00:08:40.675 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.676 { 00:08:40.676 "params": { 00:08:40.676 "name": "Nvme$subsystem", 00:08:40.676 "trtype": "$TEST_TRANSPORT", 00:08:40.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.676 "adrfam": "ipv4", 00:08:40.676 "trsvcid": "$NVMF_PORT", 00:08:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.676 "hdgst": ${hdgst:-false}, 00:08:40.676 "ddgst": ${ddgst:-false} 00:08:40.676 }, 00:08:40.676 "method": "bdev_nvme_attach_controller" 00:08:40.676 } 00:08:40.676 EOF 00:08:40.676 )") 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=448194 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.676 { 00:08:40.676 "params": { 00:08:40.676 "name": "Nvme$subsystem", 00:08:40.676 "trtype": "$TEST_TRANSPORT", 00:08:40.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.676 "adrfam": "ipv4", 00:08:40.676 "trsvcid": "$NVMF_PORT", 00:08:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.676 "hdgst": ${hdgst:-false}, 
00:08:40.676 "ddgst": ${ddgst:-false} 00:08:40.676 }, 00:08:40.676 "method": "bdev_nvme_attach_controller" 00:08:40.676 } 00:08:40.676 EOF 00:08:40.676 )") 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.676 { 00:08:40.676 "params": { 00:08:40.676 "name": "Nvme$subsystem", 00:08:40.676 "trtype": "$TEST_TRANSPORT", 00:08:40.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.676 "adrfam": "ipv4", 00:08:40.676 "trsvcid": "$NVMF_PORT", 00:08:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.676 "hdgst": ${hdgst:-false}, 00:08:40.676 "ddgst": ${ddgst:-false} 00:08:40.676 }, 00:08:40.676 "method": "bdev_nvme_attach_controller" 00:08:40.676 } 00:08:40.676 EOF 00:08:40.676 )") 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 448187 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.676 "params": { 00:08:40.676 "name": "Nvme1", 00:08:40.676 "trtype": "tcp", 00:08:40.676 "traddr": "10.0.0.2", 00:08:40.676 "adrfam": "ipv4", 00:08:40.676 "trsvcid": "4420", 00:08:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.676 "hdgst": false, 00:08:40.676 "ddgst": false 00:08:40.676 }, 00:08:40.676 "method": "bdev_nvme_attach_controller" 00:08:40.676 }' 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.676 "params": { 00:08:40.676 "name": "Nvme1", 00:08:40.676 "trtype": "tcp", 00:08:40.676 "traddr": "10.0.0.2", 00:08:40.676 "adrfam": "ipv4", 00:08:40.676 "trsvcid": "4420", 00:08:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.676 "hdgst": false, 00:08:40.676 "ddgst": false 00:08:40.676 }, 00:08:40.676 "method": "bdev_nvme_attach_controller" 00:08:40.676 }' 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.676 "params": { 00:08:40.676 "name": "Nvme1", 00:08:40.676 "trtype": "tcp", 00:08:40.676 "traddr": "10.0.0.2", 00:08:40.676 "adrfam": "ipv4", 00:08:40.676 "trsvcid": "4420", 00:08:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.676 "hdgst": false, 00:08:40.676 "ddgst": false 00:08:40.676 }, 00:08:40.676 "method": "bdev_nvme_attach_controller" 00:08:40.676 }' 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:40.676 12:43:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.676 "params": { 00:08:40.676 "name": "Nvme1", 00:08:40.676 "trtype": "tcp", 00:08:40.676 "traddr": "10.0.0.2", 00:08:40.676 "adrfam": "ipv4", 00:08:40.676 "trsvcid": "4420", 00:08:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.676 "hdgst": false, 00:08:40.676 "ddgst": false 00:08:40.676 }, 00:08:40.676 "method": "bdev_nvme_attach_controller" 00:08:40.676 }' 00:08:40.676 [2024-11-25 12:43:20.463012] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:08:40.676 [2024-11-25 12:43:20.463065] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:40.676 [2024-11-25 12:43:20.467146] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:08:40.676 [2024-11-25 12:43:20.467192] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:40.676 [2024-11-25 12:43:20.467618] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:08:40.676 [2024-11-25 12:43:20.467666] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:40.676 [2024-11-25 12:43:20.469256] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:08:40.676 [2024-11-25 12:43:20.469302] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:40.938 [2024-11-25 12:43:20.627121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.938 [2024-11-25 12:43:20.656153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:40.938 [2024-11-25 12:43:20.680803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.938 [2024-11-25 12:43:20.710137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:40.938 [2024-11-25 12:43:20.710665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.938 [2024-11-25 12:43:20.738709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:40.938 [2024-11-25 12:43:20.784901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.938 [2024-11-25 12:43:20.813553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:41.199 Running I/O for 1 seconds... 00:08:41.199 Running I/O for 1 seconds... 00:08:41.199 Running I/O for 1 seconds... 00:08:41.199 Running I/O for 1 seconds... 00:08:42.142 181872.00 IOPS, 710.44 MiB/s 00:08:42.142 Latency(us) 00:08:42.142 [2024-11-25T11:43:22.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.142 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:42.142 Nvme1n1 : 1.00 181517.33 709.05 0.00 0.00 701.19 302.08 1966.08 00:08:42.142 [2024-11-25T11:43:22.045Z] =================================================================================================================== 00:08:42.142 [2024-11-25T11:43:22.045Z] Total : 181517.33 709.05 0.00 0.00 701.19 302.08 1966.08 00:08:42.142 7748.00 IOPS, 30.27 MiB/s 00:08:42.142 Latency(us) 00:08:42.142 [2024-11-25T11:43:22.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.142 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:42.142 Nvme1n1 : 1.02 7766.61 30.34 0.00 0.00 16375.16 7755.09 23156.05 00:08:42.142 [2024-11-25T11:43:22.045Z] =================================================================================================================== 00:08:42.142 [2024-11-25T11:43:22.045Z] Total : 7766.61 30.34 0.00 0.00 16375.16 7755.09 23156.05 00:08:42.142 17262.00 IOPS, 67.43 MiB/s 00:08:42.142 Latency(us) 00:08:42.142 [2024-11-25T11:43:22.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.142 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:42.143 Nvme1n1 : 1.01 17325.49 67.68 0.00 0.00 7367.01 3549.87 17476.27 00:08:42.143 [2024-11-25T11:43:22.046Z] =================================================================================================================== 00:08:42.143 [2024-11-25T11:43:22.046Z] Total : 17325.49 67.68 0.00 0.00 7367.01 3549.87 17476.27 00:08:42.143 7693.00 IOPS, 30.05 MiB/s 00:08:42.143 Latency(us) 00:08:42.143 [2024-11-25T11:43:22.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.143 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:42.143 Nvme1n1 : 1.01 7823.13 30.56 0.00 0.00 16319.63 3549.87 36700.16 00:08:42.143 [2024-11-25T11:43:22.046Z] 
=================================================================================================================== 00:08:42.143 [2024-11-25T11:43:22.046Z] Total : 7823.13 30.56 0.00 0.00 16319.63 3549.87 36700.16 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 448189 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 448191 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 448194 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:42.403 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.404 rmmod nvme_tcp 00:08:42.404 rmmod nvme_fabrics 00:08:42.404 rmmod nvme_keyring 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 447986 ']' 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 447986 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 447986 ']' 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 447986 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 447986 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 447986' 00:08:42.404 killing process with pid 447986 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 447986 00:08:42.404 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 447986 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.665 12:43:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.579 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:44.579 00:08:44.579 real 0m13.638s 00:08:44.579 user 0m18.705s 00:08:44.579 sys 0m7.720s 00:08:44.579 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.579 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.579 ************************************ 00:08:44.579 END TEST nvmf_bdev_io_wait 00:08:44.579 ************************************ 00:08:44.579 12:43:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:44.579 12:43:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.579 12:43:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.579 12:43:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.842 ************************************ 00:08:44.842 START TEST nvmf_queue_depth 00:08:44.842 ************************************ 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:44.842 * Looking for test storage... 
00:08:44.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:44.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.842 --rc genhtml_branch_coverage=1 00:08:44.842 --rc genhtml_function_coverage=1 00:08:44.842 --rc genhtml_legend=1 00:08:44.842 --rc geninfo_all_blocks=1 00:08:44.842 --rc geninfo_unexecuted_blocks=1 00:08:44.842 00:08:44.842 ' 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:44.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.842 --rc genhtml_branch_coverage=1 00:08:44.842 --rc genhtml_function_coverage=1 00:08:44.842 --rc genhtml_legend=1 00:08:44.842 --rc geninfo_all_blocks=1 00:08:44.842 --rc geninfo_unexecuted_blocks=1 00:08:44.842 00:08:44.842 ' 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:44.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.842 --rc genhtml_branch_coverage=1 00:08:44.842 --rc genhtml_function_coverage=1 00:08:44.842 --rc genhtml_legend=1 00:08:44.842 --rc geninfo_all_blocks=1 00:08:44.842 --rc geninfo_unexecuted_blocks=1 00:08:44.842 00:08:44.842 ' 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:44.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.842 --rc genhtml_branch_coverage=1 00:08:44.842 --rc genhtml_function_coverage=1 00:08:44.842 --rc genhtml_legend=1 00:08:44.842 --rc geninfo_all_blocks=1 00:08:44.842 --rc geninfo_unexecuted_blocks=1 00:08:44.842 00:08:44.842 ' 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.842 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.843 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.104 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:45.104 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
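
The "[: : integer expression expected" message above is the test builtin rejecting an empty string in a numeric comparison ('[' '' -eq 1 ']' at nvmf/common.sh line 33). The test evaluates false, so the run continues unharmed; a small repro plus the usual quieting fix (variable name is illustrative):

    flag=''                                  # the unset/empty value seen in the trace
    [ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected", stays false
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting empty to 0 gives the same result, quietly
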
MALLOC_BLOCK_SIZE=512 00:08:45.104 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:45.104 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:45.104 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.104 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.104 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.105 12:43:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.251 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:53.252 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:53.252 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:53.252 Found net devices under 0000:31:00.0: cvl_0_0 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:53.252 Found net devices under 0000:31:00.1: cvl_0_1 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
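
Each matched PCI function above is resolved to its kernel net device by globbing sysfs; a condensed sketch of the loop just traced (PCI addresses as in this run):

    for pci in 0000:31:00.0 0000:31:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] || continue                          # skip if the glob matched nothing
            echo "Found net devices under $pci: ${netdev##*/}"    # e.g. cvl_0_0, cvl_0_1
        done
    done
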
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.252 12:43:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.252 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.252 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.252 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.252 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.252 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.252 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.252 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.514 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:08:53.514 00:08:53.514 --- 10.0.0.2 ping statistics --- 00:08:53.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.514 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:08:53.515 00:08:53.515 --- 10.0.0.1 ping statistics --- 00:08:53.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.515 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=453376 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 453376 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 453376 ']' 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.515 12:43:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.515 [2024-11-25 12:43:33.284101] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
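
The topology built by nvmf_tcp_init above, gathered in one place: the target NIC moves into its own network namespace so that the initiator (10.0.0.1 on cvl_0_1, root namespace) and the target (10.0.0.2 on cvl_0_0, namespace cvl_0_0_ns_spdk) exchange traffic over the physical link rather than loopback, and the two pings confirm the path in both directions before the target starts. Commands as traced, only grouped:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
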
00:08:53.515 [2024-11-25 12:43:33.284175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.515 [2024-11-25 12:43:33.398237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.777 [2024-11-25 12:43:33.449740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.777 [2024-11-25 12:43:33.449789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.777 [2024-11-25 12:43:33.449797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.777 [2024-11-25 12:43:33.449805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.777 [2024-11-25 12:43:33.449811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.777 [2024-11-25 12:43:33.450600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.348 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.348 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:54.348 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.348 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.348 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 [2024-11-25 12:43:34.147506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 Malloc0 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.349 12:43:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 [2024-11-25 12:43:34.192622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=453607 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 453607 /var/tmp/bdevperf.sock 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 453607 ']' 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.349 12:43:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 [2024-11-25 12:43:34.249294] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
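
The rpc_cmd sequence above, written out as the equivalent direct scripts/rpc.py calls against the target's default /var/tmp/spdk.sock (arguments exactly as traced; MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 supply the bdev geometry):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport with the traced NVMF_TRANSPORT_OPTS
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
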
00:08:54.349 [2024-11-25 12:43:34.249370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid453607 ] 00:08:54.608 [2024-11-25 12:43:34.332242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.608 [2024-11-25 12:43:34.373798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.251 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.251 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:55.251 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:55.251 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.251 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.549 NVMe0n1 00:08:55.549 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.549 12:43:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.549 Running I/O for 10 seconds... 00:08:57.463 10247.00 IOPS, 40.03 MiB/s [2024-11-25T11:43:38.308Z] 11121.00 IOPS, 43.44 MiB/s [2024-11-25T11:43:39.693Z] 11285.33 IOPS, 44.08 MiB/s [2024-11-25T11:43:40.264Z] 11511.75 IOPS, 44.97 MiB/s [2024-11-25T11:43:41.650Z] 11625.80 IOPS, 45.41 MiB/s [2024-11-25T11:43:42.594Z] 11643.67 IOPS, 45.48 MiB/s [2024-11-25T11:43:43.535Z] 11695.29 IOPS, 45.68 MiB/s [2024-11-25T11:43:44.475Z] 11765.12 IOPS, 45.96 MiB/s [2024-11-25T11:43:45.416Z] 11797.56 IOPS, 46.08 MiB/s [2024-11-25T11:43:45.416Z] 11789.70 IOPS, 46.05 MiB/s 00:09:05.513 Latency(us) 00:09:05.513 [2024-11-25T11:43:45.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.513 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:05.513 Verification LBA range: start 0x0 length 0x4000 00:09:05.513 NVMe0n1 : 10.04 11831.02 46.21 0.00 0.00 86254.01 2730.67 62040.75 00:09:05.513 [2024-11-25T11:43:45.416Z] =================================================================================================================== 00:09:05.513 [2024-11-25T11:43:45.416Z] Total : 11831.02 46.21 0.00 0.00 86254.01 2730.67 62040.75 00:09:05.513 { 00:09:05.513 "results": [ 00:09:05.513 { 00:09:05.513 "job": "NVMe0n1", 00:09:05.513 "core_mask": "0x1", 00:09:05.513 "workload": "verify", 00:09:05.513 "status": "finished", 00:09:05.513 "verify_range": { 00:09:05.513 "start": 0, 00:09:05.513 "length": 16384 00:09:05.513 }, 00:09:05.513 "queue_depth": 1024, 00:09:05.513 "io_size": 4096, 00:09:05.513 "runtime": 10.04089, 00:09:05.513 "iops": 11831.022947169025, 00:09:05.513 "mibps": 46.214933387379006, 00:09:05.513 "io_failed": 0, 00:09:05.513 "io_timeout": 0, 00:09:05.513 "avg_latency_us": 86254.00674467285, 00:09:05.513 "min_latency_us": 2730.6666666666665, 00:09:05.513 "max_latency_us": 62040.746666666666 00:09:05.513 } 00:09:05.513 ], 00:09:05.513 "core_count": 1 00:09:05.513 } 00:09:05.513 12:43:45 
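
Cross-checking the summary line: at the 4096-byte I/O size, 11831.02 IOPS works out to 11831.02 * 4096 / 1048576 = 46.21 MiB/s, matching the reported throughput:

    awk 'BEGIN { printf "%.2f MiB/s\n", 11831.02 * 4096 / 1048576 }'    # -> 46.21 MiB/s
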
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 453607 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 453607 ']' 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 453607 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453607 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453607' 00:09:05.513 killing process with pid 453607 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 453607 00:09:05.513 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.513 00:09:05.513 Latency(us) 00:09:05.513 [2024-11-25T11:43:45.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.513 [2024-11-25T11:43:45.416Z] =================================================================================================================== 00:09:05.513 [2024-11-25T11:43:45.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.513 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 453607 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.774 rmmod nvme_tcp 00:09:05.774 rmmod nvme_fabrics 00:09:05.774 rmmod nvme_keyring 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 453376 ']' 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 453376 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 453376 ']' 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 453376 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453376 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453376' 00:09:05.774 killing process with pid 453376 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 453376 00:09:05.774 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 453376 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.034 12:43:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.580 12:43:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.580 00:09:08.580 real 0m23.350s 00:09:08.580 user 0m25.971s 00:09:08.580 sys 0m7.625s 00:09:08.580 12:43:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.580 12:43:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.580 ************************************ 00:09:08.580 END TEST nvmf_queue_depth 00:09:08.580 ************************************ 00:09:08.580 12:43:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.580 12:43:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.580 12:43:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.580 12:43:47 nvmf_tcp.nvmf_target_core -- 
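
Teardown mirrors the setup traced earlier: the SPDK_NVMF-tagged firewall rule is dropped by replaying the ruleset without it, the test namespace is removed (the _remove_spdk_ns helper runs with xtrace redirected away, so its commands do not appear in the log; the netns delete below is its assumed effect), and the initiator address is flushed:

    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the tagged rule
    ip netns delete cvl_0_0_ns_spdk                         # assumption: what _remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1
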
common/autotest_common.sh@10 -- # set +x 00:09:08.580 ************************************ 00:09:08.580 START TEST nvmf_target_multipath 00:09:08.580 ************************************ 00:09:08.580 12:43:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.580 * Looking for test storage... 00:09:08.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.580 --rc genhtml_branch_coverage=1 00:09:08.580 --rc genhtml_function_coverage=1 00:09:08.580 --rc genhtml_legend=1 00:09:08.580 --rc geninfo_all_blocks=1 00:09:08.580 --rc geninfo_unexecuted_blocks=1 00:09:08.580 00:09:08.580 ' 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.580 --rc genhtml_branch_coverage=1 00:09:08.580 --rc genhtml_function_coverage=1 00:09:08.580 --rc genhtml_legend=1 00:09:08.580 --rc geninfo_all_blocks=1 00:09:08.580 --rc geninfo_unexecuted_blocks=1 00:09:08.580 00:09:08.580 ' 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.580 --rc genhtml_branch_coverage=1 00:09:08.580 --rc genhtml_function_coverage=1 00:09:08.580 --rc genhtml_legend=1 00:09:08.580 --rc geninfo_all_blocks=1 00:09:08.580 --rc geninfo_unexecuted_blocks=1 00:09:08.580 00:09:08.580 ' 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.580 --rc genhtml_branch_coverage=1 00:09:08.580 --rc genhtml_function_coverage=1 00:09:08.580 --rc genhtml_legend=1 00:09:08.580 --rc geninfo_all_blocks=1 00:09:08.580 --rc geninfo_unexecuted_blocks=1 00:09:08.580 00:09:08.580 ' 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.580 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.581 12:43:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:16.717 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:16.717 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:16.717 Found net devices under 0000:31:00.0: cvl_0_0 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.717 12:43:56 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:16.717 Found net devices under 0000:31:00.1: cvl_0_1 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.717 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:09:16.718 00:09:16.718 --- 10.0.0.2 ping statistics --- 00:09:16.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.718 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:09:16.718 00:09:16.718 --- 10.0.0.1 ping statistics --- 00:09:16.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.718 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:16.718 only one NIC for nvmf test 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
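In short, the nvmf_tcp_init sequence traced above boils down to the commands below. Interface names (cvl_0_0, cvl_0_1) and the 10.0.0.x addresses are the values detected in this run; this is a condensed sketch of what the trace executed, not additional test output.

# Move the target-side port into its own network namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port (4420) on the initiator interface, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1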
00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:16.718 rmmod nvme_tcp 00:09:16.718 rmmod nvme_fabrics 00:09:16.718 rmmod nvme_keyring 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.718 12:43:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.266 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.266 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:19.266 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:19.266 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.266 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.267 00:09:19.267 real 0m10.749s 00:09:19.267 user 0m2.449s 00:09:19.267 sys 0m6.246s 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:19.267 ************************************ 00:09:19.267 END TEST nvmf_target_multipath 00:09:19.267 ************************************ 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.267 ************************************ 00:09:19.267 START TEST nvmf_zcopy 00:09:19.267 ************************************ 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:19.267 * Looking for test storage... 
00:09:19.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.267 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.268 --rc genhtml_branch_coverage=1 00:09:19.268 --rc genhtml_function_coverage=1 00:09:19.268 --rc genhtml_legend=1 00:09:19.268 --rc geninfo_all_blocks=1 00:09:19.268 --rc geninfo_unexecuted_blocks=1 00:09:19.268 00:09:19.268 ' 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.268 --rc genhtml_branch_coverage=1 00:09:19.268 --rc genhtml_function_coverage=1 00:09:19.268 --rc genhtml_legend=1 00:09:19.268 --rc geninfo_all_blocks=1 00:09:19.268 --rc geninfo_unexecuted_blocks=1 00:09:19.268 00:09:19.268 ' 00:09:19.268 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.268 --rc genhtml_branch_coverage=1 00:09:19.269 --rc genhtml_function_coverage=1 00:09:19.269 --rc genhtml_legend=1 00:09:19.269 --rc geninfo_all_blocks=1 00:09:19.269 --rc geninfo_unexecuted_blocks=1 00:09:19.269 00:09:19.269 ' 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.269 --rc genhtml_branch_coverage=1 00:09:19.269 --rc genhtml_function_coverage=1 00:09:19.269 --rc genhtml_legend=1 00:09:19.269 --rc geninfo_all_blocks=1 00:09:19.269 --rc geninfo_unexecuted_blocks=1 00:09:19.269 00:09:19.269 ' 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.269 12:43:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.269 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.269 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.269 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.269 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.269 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.270 12:43:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:27.412 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:27.412 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:27.412 Found net devices under 0000:31:00.0: cvl_0_0 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.412 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:27.413 Found net devices under 0000:31:00.1: cvl_0_1 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.413 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:09:27.673 00:09:27.673 --- 10.0.0.2 ping statistics --- 00:09:27.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.673 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:09:27.673 00:09:27.673 --- 10.0.0.1 ping statistics --- 00:09:27.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.673 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=465337 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 465337 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 465337 ']' 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.673 12:44:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.673 [2024-11-25 12:44:07.530650] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:09:27.673 [2024-11-25 12:44:07.530700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.934 [2024-11-25 12:44:07.635812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.934 [2024-11-25 12:44:07.679821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.934 [2024-11-25 12:44:07.679881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.934 [2024-11-25 12:44:07.679890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.934 [2024-11-25 12:44:07.679897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.934 [2024-11-25 12:44:07.679904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.934 [2024-11-25 12:44:07.680630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.507 [2024-11-25 12:44:08.387208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.507 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.768 [2024-11-25 12:44:08.411543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.768 malloc0 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:28.768 { 00:09:28.768 "params": { 00:09:28.768 "name": "Nvme$subsystem", 00:09:28.768 "trtype": "$TEST_TRANSPORT", 00:09:28.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.768 "adrfam": "ipv4", 00:09:28.768 "trsvcid": "$NVMF_PORT", 00:09:28.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.768 "hdgst": ${hdgst:-false}, 00:09:28.768 "ddgst": ${ddgst:-false} 00:09:28.768 }, 00:09:28.768 "method": "bdev_nvme_attach_controller" 00:09:28.768 } 00:09:28.768 EOF 00:09:28.768 )") 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
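Condensed, the target bring-up traced above is: start nvmf_tgt inside the namespace, wait for its RPC socket, then issue the zcopy.sh@22-@30 RPC sequence. All values below are copied from this run's trace; the readiness loop is only an illustrative stand-in for the waitforlisten helper, not its actual implementation.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# nvmfappstart -m 0x2: launch the target in the namespace, then wait for /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
until "$rpc" rpc_get_methods &> /dev/null; do sleep 0.1; done  # rpc.py defaults to /var/tmp/spdk.sock
# TCP transport with zero-copy enabled, one subsystem, listeners, and a malloc-backed namespace.
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0                   # 32 MiB malloc bdev, 4 KiB blocks
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1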
00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:28.768 12:44:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:28.768 "params": {
00:09:28.768 "name": "Nvme1",
00:09:28.768 "trtype": "tcp",
00:09:28.768 "traddr": "10.0.0.2",
00:09:28.768 "adrfam": "ipv4",
00:09:28.768 "trsvcid": "4420",
00:09:28.768 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:28.768 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:28.768 "hdgst": false,
00:09:28.768 "ddgst": false
00:09:28.768 },
00:09:28.768 "method": "bdev_nvme_attach_controller"
00:09:28.768 }'
00:09:28.768 [2024-11-25 12:44:08.514242] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:09:28.768 [2024-11-25 12:44:08.514314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465686 ]
00:09:28.768 [2024-11-25 12:44:08.599418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:28.768 [2024-11-25 12:44:08.640899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.029 Running I/O for 10 seconds...
00:09:30.914 6651.00 IOPS, 51.96 MiB/s
[2024-11-25T11:44:12.199Z] 6837.50 IOPS, 53.42 MiB/s
[2024-11-25T11:44:13.139Z] 7817.33 IOPS, 61.07 MiB/s
[2024-11-25T11:44:14.080Z] 8305.00 IOPS, 64.88 MiB/s
[2024-11-25T11:44:15.021Z] 8599.60 IOPS, 67.18 MiB/s
[2024-11-25T11:44:15.960Z] 8796.33 IOPS, 68.72 MiB/s
[2024-11-25T11:44:16.901Z] 8936.29 IOPS, 69.81 MiB/s
[2024-11-25T11:44:17.844Z] 9040.25 IOPS, 70.63 MiB/s
[2024-11-25T11:44:19.230Z] 9124.00 IOPS, 71.28 MiB/s
[2024-11-25T11:44:19.230Z] 9189.40 IOPS, 71.79 MiB/s
00:09:39.327 Latency(us)
[2024-11-25T11:44:19.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:39.327 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:39.327 Verification LBA range: start 0x0 length 0x1000
00:09:39.327 Nvme1n1 : 10.01 9189.06 71.79 0.00 0.00 13877.46 1706.67 28398.93
00:09:39.327 [2024-11-25T11:44:19.230Z] ===================================================================================================================
00:09:39.327 [2024-11-25T11:44:19.230Z] Total : 9189.06 71.79 0.00 0.00 13877.46 1706.67 28398.93
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=467701
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:39.327 {
00:09:39.327 "params": {
00:09:39.327 "name": "Nvme$subsystem",
00:09:39.327 "trtype": "$TEST_TRANSPORT",
00:09:39.327 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:39.327 "adrfam": "ipv4",
00:09:39.327 "trsvcid": "$NVMF_PORT",
00:09:39.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:39.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:39.327 "hdgst": ${hdgst:-false},
00:09:39.327 "ddgst": ${ddgst:-false}
00:09:39.327 },
00:09:39.327 "method": "bdev_nvme_attach_controller"
00:09:39.327 }
00:09:39.327 EOF
00:09:39.327 )")
[2024-11-25 12:44:18.943857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-25 12:44:18.943892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:39.327 12:44:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:39.327 "params": {
00:09:39.327 "name": "Nvme1",
00:09:39.327 "trtype": "tcp",
00:09:39.327 "traddr": "10.0.0.2",
00:09:39.327 "adrfam": "ipv4",
00:09:39.327 "trsvcid": "4420",
00:09:39.327 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:39.327 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:39.327 "hdgst": false,
00:09:39.327 "ddgst": false
00:09:39.327 },
00:09:39.327 "method": "bdev_nvme_attach_controller"
00:09:39.327 }'
00:09:39.327 [2024-11-25 12:44:18.955852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:18.955866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:18.967885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:18.967893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:18.979913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:18.979920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:18.991059] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:09:39.327 [2024-11-25 12:44:18.991107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467701 ]
00:09:39.327 [2024-11-25 12:44:18.991945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:18.991951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.003976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.003983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.016006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.016013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.028036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.028043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.040069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.040076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.052101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.052108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.064131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.064137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.072895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.327 [2024-11-25 12:44:19.076162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.076169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.088193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.088202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.100224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.100234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.108373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:39.327 [2024-11-25 12:44:19.112252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.112264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.124290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.124301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.136319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.136331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.148346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.148355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.160378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.160388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.172407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.172414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.184454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.184470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.196478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.196488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.208511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.208520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.327 [2024-11-25 12:44:19.220540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.327 [2024-11-25 12:44:19.220549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.589 [2024-11-25 12:44:19.232569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.589 [2024-11-25 12:44:19.232578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.589 [2024-11-25 12:44:19.244608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.589 [2024-11-25 12:44:19.244623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.589 Running I/O for 5 seconds...
00:09:39.589 [2024-11-25 12:44:19.256631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.256637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.272382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.272398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.286163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.286178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.299633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.299649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.313300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.313316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.326292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.326309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.339309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.339326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.352137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.352156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.365556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.365571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.378628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.378643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.391301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.391316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.403998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.404014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.417413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.417428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.430997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.431012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.443529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 
[2024-11-25 12:44:19.443544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.456426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.456441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.469979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.469994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.589 [2024-11-25 12:44:19.482479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.589 [2024-11-25 12:44:19.482494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.849 [2024-11-25 12:44:19.495000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.849 [2024-11-25 12:44:19.495016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.849 [2024-11-25 12:44:19.508581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.849 [2024-11-25 12:44:19.508596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.849 [2024-11-25 12:44:19.522172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.849 [2024-11-25 12:44:19.522186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.534917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.534931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.547713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.547728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.560335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.560350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.573198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.573213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.586662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.586677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.599628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.599643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.613051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.613066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.625818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.625833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.639327] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.639342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.652322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.652337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.665625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.665640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.678494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.678510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.691000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.691015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.704031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.704046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.717272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.717287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.730540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.730555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.850 [2024-11-25 12:44:19.743536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.850 [2024-11-25 12:44:19.743552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.756165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.756181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.768903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.768918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.782311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.782326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.795111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.795127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.808008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.808024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.820578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.820593] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.833762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.833778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.846305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.846320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.859459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.859475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.872621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.872636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.886102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.886117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.899097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.899112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.911881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.911896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.925115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.925130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.938170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.938185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.950845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.950861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.963594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.963609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.976345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.976360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:19.989615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:19.989630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.111 [2024-11-25 12:44:20.002999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.111 [2024-11-25 12:44:20.003016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.015803] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.015819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.028970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.028986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.041647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.041662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.054301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.054316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.068003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.068018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.081844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.081858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.094411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.094426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.107975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.107990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.121155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.121171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.134307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.134323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.147867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.147883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.161481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.161496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.174129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.174144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.187665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.187680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.201019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.201034] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.214379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.214395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.227817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.227833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.240466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.240481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 [2024-11-25 12:44:20.252846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.252866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.373 19046.00 IOPS, 148.80 MiB/s [2024-11-25T11:44:20.276Z] [2024-11-25 12:44:20.265379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.373 [2024-11-25 12:44:20.265394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.278008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.278023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.291249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.291265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.304132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.304147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.317307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.317322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.330595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.330615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.343893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.343908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.357476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.357491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.370447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.370462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.383365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.383379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 
12:44:20.396950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.396965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.410213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.410227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.423800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.423814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.436366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.436381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.449692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.449707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.463078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.463093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.475404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.475419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.489306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.489321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.503100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.503115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.515815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.515829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.634 [2024-11-25 12:44:20.528622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.634 [2024-11-25 12:44:20.528636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.541655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.541669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.555200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.555215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.566496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.566511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.580116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.580134] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.593555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.593569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.607231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.607245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.620482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.620496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.634038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.634053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.646458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.646473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.659189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.659204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.672545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.672560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.685552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.685566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.698186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.698200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.895 [2024-11-25 12:44:20.711797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.895 [2024-11-25 12:44:20.711812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.896 [2024-11-25 12:44:20.725443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.896 [2024-11-25 12:44:20.725458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.896 [2024-11-25 12:44:20.738195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.896 [2024-11-25 12:44:20.738209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.896 [2024-11-25 12:44:20.751358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.896 [2024-11-25 12:44:20.751373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.896 [2024-11-25 12:44:20.764146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.896 [2024-11-25 12:44:20.764160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.896 [2024-11-25 12:44:20.777641] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.896 [2024-11-25 12:44:20.777655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.896 [2024-11-25 12:44:20.791088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.896 [2024-11-25 12:44:20.791103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.804491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.804506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.817783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.817797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.831134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.831153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.843734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.843749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.856821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.856835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.870299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.870314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.883714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.883729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.897132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.897147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.910243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.910258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.922993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.923007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.936052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.936067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.948435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.948450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.961650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.961664] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.974908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.974923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:20.987890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:20.987904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:21.000146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:21.000161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:21.013588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:21.013603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:21.027039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:21.027054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:21.040420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:21.040435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.156 [2024-11-25 12:44:21.053272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.156 [2024-11-25 12:44:21.053287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.065765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.065780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.078490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.078508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.091222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.091236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.104733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.104748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.117497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.117512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.130594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.130608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.143216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.143231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.156520] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.156534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.169674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.169688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.182993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.183008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.196688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.196703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.210373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.210388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.222903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.222918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.235269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.235283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.247943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.247958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 19185.00 IOPS, 149.88 MiB/s [2024-11-25T11:44:21.320Z] [2024-11-25 12:44:21.261018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.261033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.273262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.273276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.286465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.286479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.299821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.299836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.417 [2024-11-25 12:44:21.313550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.417 [2024-11-25 12:44:21.313565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.326212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.326227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.338960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:41.679 [2024-11-25 12:44:21.338974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.352601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.352616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.365755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.365769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.378462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.378478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.390990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.391006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.404017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.404032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.417081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.417096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.430570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.430586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.444139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.444154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.457461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.457476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.470873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.470888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.483970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.483985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.497287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.497302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.510436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.510451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.523148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.679 [2024-11-25 12:44:21.523164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.679 [2024-11-25 12:44:21.535586] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:41.679 [2024-11-25 12:44:21.535601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:41.679 [... the same subsystem.c:2123 / nvmf_rpc.c:1517 error pair repeats roughly every 13 ms, from 12:44:21.548236 through 12:44:22.253249, while the test keeps retrying nvmf_subsystem_add_ns for the already-claimed NSID 1 ...]
00:09:42.461 19213.33 IOPS, 150.10 MiB/s [2024-11-25T11:44:22.364Z]
00:09:42.461 [... the error pair continues, from 12:44:22.266241 through 12:44:23.258797 ...]
00:09:43.505 19211.75 IOPS, 150.09 MiB/s [2024-11-25T11:44:23.408Z]
00:09:43.505 [... the error pair continues, from 12:44:23.271646 through 12:44:24.180204 ...]
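The repetition above is the interesting signal: every nvmf_subsystem_add_ns RPC is rejected because the subsystem still holds NSID 1. As a minimal sketch (not part of this run), the same pair of target-side errors can be provoked by hand with rpc.py from the spdk checkout; the Malloc1 bdev name and sizes here are illustrative, not taken from the log:

  # hypothetical spare bdev: 64 MiB, 512-byte blocks
  scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  # request an NSID the subsystem already uses -- the RPC fails and the target
  # logs the same "Requested NSID 1 already in use" / "Unable to add namespace"
  # pair seen throughout this section
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1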
00:09:44.549 [... the error pair continues, from 12:44:24.193405 through 12:44:24.260306 ...]
00:09:44.549 19237.80 IOPS, 150.30 MiB/s [2024-11-25T11:44:24.452Z]
00:09:44.549 [... one final error pair at 12:44:24.269810 before the run completes ...]
00:09:44.549
00:09:44.549 Latency(us)
00:09:44.549 [2024-11-25T11:44:24.452Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:44.549 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:44.549 Nvme1n1                     :       5.01   19238.83     150.30       0.00     0.00    6646.15    2662.40   15728.64
00:09:44.549 [2024-11-25T11:44:24.452Z] ===================================================================================================================
00:09:44.549 [2024-11-25T11:44:24.452Z] Total                       :              19238.83     150.30       0.00     0.00    6646.15    2662.40   15728.64
00:09:44.549 [... the error pair keeps firing during teardown, from 12:44:24.281838 through 12:44:24.378076 ...]
00:09:44.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (467701) - No such process
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 467701
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:44.549 delay0
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.549 12:44:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:44.810 [2024-11-25 12:44:24.491176] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:52.948 Initializing NVMe Controllers
00:09:52.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:52.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:52.948 Initialization complete. Launching workers.
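What the trace above does, in plain terms: zcopy.sh swaps the real namespace for a delay bdev so the abort example always has slow I/O in flight to cancel (rpc_cmd is the test suite's thin wrapper around scripts/rpc.py). A sketch of the same sequence, assuming the target from this run listening on 10.0.0.2:4420; the bdev_delay_create latencies are in microseconds:

  # free up NSID 1, then re-publish it backed by an artificially slow bdev
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s avg/p99 read and write latency
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue I/O at the slow namespace and abort it for 5 seconds
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'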
00:09:52.948 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 260, failed: 22944 00:09:52.948 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23120, failed to submit 84 00:09:52.948 success 23008, unsuccessful 112, failed 0 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.948 rmmod nvme_tcp 00:09:52.948 rmmod nvme_fabrics 00:09:52.948 rmmod nvme_keyring 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 465337 ']' 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 465337 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 465337 ']' 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 465337 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 465337 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 465337' 00:09:52.948 killing process with pid 465337 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 465337 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 465337 00:09:52.948 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:52.949 12:44:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.949 12:44:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.038 12:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.038 00:09:54.038 real 0m35.109s 00:09:54.038 user 0m45.439s 00:09:54.038 sys 0m12.095s 00:09:54.038 12:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.038 12:44:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.038 ************************************ 00:09:54.038 END TEST nvmf_zcopy 00:09:54.038 ************************************ 00:09:54.365 12:44:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:54.365 12:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.365 12:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.365 12:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.365 ************************************ 00:09:54.365 START TEST nvmf_nmic 00:09:54.365 ************************************ 00:09:54.365 12:44:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:54.365 * Looking for test storage... 
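Before the next test starts: the nvmf_zcopy teardown above steps through the suite's killprocess helper one command at a time (the empty-pid guard, the kill -0 liveness probe, the ps comm lookup that resolves to reactor_1, the refusal to kill a sudo wrapper, then kill + wait). A condensed sketch of that logic — a paraphrase of the visible trace, not the verbatim autotest_common.sh source:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                    # the '[' -z 465337 ']' guard in the trace
      kill -0 "$pid" 2>/dev/null || return 0       # liveness probe; nothing to do if already gone
      local process_name=
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 for an SPDK app
      fi
      [ "$process_name" != sudo ] || return 1      # never kill the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true              # reap the child so the next test starts clean
  }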
00:09:54.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.365 --rc genhtml_branch_coverage=1 00:09:54.365 --rc genhtml_function_coverage=1 00:09:54.365 --rc genhtml_legend=1 00:09:54.365 --rc geninfo_all_blocks=1 00:09:54.365 --rc geninfo_unexecuted_blocks=1 00:09:54.365 00:09:54.365 ' 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.365 --rc genhtml_branch_coverage=1 00:09:54.365 --rc genhtml_function_coverage=1 00:09:54.365 --rc genhtml_legend=1 00:09:54.365 --rc geninfo_all_blocks=1 00:09:54.365 --rc geninfo_unexecuted_blocks=1 00:09:54.365 00:09:54.365 ' 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.365 --rc genhtml_branch_coverage=1 00:09:54.365 --rc genhtml_function_coverage=1 00:09:54.365 --rc genhtml_legend=1 00:09:54.365 --rc geninfo_all_blocks=1 00:09:54.365 --rc geninfo_unexecuted_blocks=1 00:09:54.365 00:09:54.365 ' 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.365 --rc genhtml_branch_coverage=1 00:09:54.365 --rc genhtml_function_coverage=1 00:09:54.365 --rc genhtml_legend=1 00:09:54.365 --rc geninfo_all_blocks=1 00:09:54.365 --rc geninfo_unexecuted_blocks=1 00:09:54.365 00:09:54.365 ' 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
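The trace above walks scripts/common.sh's cmp_versions path to decide whether the installed lcov (1.15) predates 2.x before picking coverage flags. A minimal re-implementation sketch of that comparison — not the actual scripts/common.sh source, and without the non-numeric guard its decimal helper adds — looks like:

  # Split both versions on . - : and compare field by field, padding with 0.
  lt() {
      local -a ver1 ver2
      local v d1 d2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          (( d1 < d2 )) && return 0   # strictly lower field -> "less than" holds
          (( d1 > d2 )) && return 1
      done
      return 1                        # all fields equal -> not less than
  }
  lt 1.15 2 && echo "lcov 1.15 is older than 2.x"   # matches the trace: return 0

As in the log, a zero return here is what steers the test toward the lcov 1.x option set (--rc lcov_branch_coverage=1 ...) rather than the 2.x spelling.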
00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.365 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:54.366 
12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.366 12:44:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:02.505 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:02.505 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.505 12:44:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:02.505 Found net devices under 0000:31:00.0: cvl_0_0 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:02.505 Found net devices under 0000:31:00.1: cvl_0_1 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:02.505 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:02.506 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:02.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:10:02.768 00:10:02.768 --- 10.0.0.2 ping statistics --- 00:10:02.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.768 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:10:02.768 00:10:02.768 --- 10.0.0.1 ping statistics --- 00:10:02.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.768 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=474946 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 474946 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 474946 ']' 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.768 12:44:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.768 [2024-11-25 12:44:42.565003] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
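Before this nvmf_tgt instance comes up, nvmftestinit has just built the two-endpoint TCP test bed seen in the trace: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2, while the initiator keeps cvl_0_1 at 10.0.0.1, and both ends are ping-checked. A condensed, hand-written replay of that plumbing (assuming root and the same cvl_0_* NIC names; the real steps live in nvmf/common.sh's nvmf_tcp_init, which also flushes old addresses and tags the iptables rule with an SPDK_NVMF comment) would be:

  ip netns add cvl_0_0_ns_spdk                       # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # sanity checks, as in the log
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target app is then launched with ip netns exec cvl_0_0_ns_spdk, which is why the listener on 10.0.0.2:4420 is only reachable through that veth-style pairing.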
00:10:02.768 [2024-11-25 12:44:42.565069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.768 [2024-11-25 12:44:42.656403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.028 [2024-11-25 12:44:42.699535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.028 [2024-11-25 12:44:42.699571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.028 [2024-11-25 12:44:42.699579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.028 [2024-11-25 12:44:42.699586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.028 [2024-11-25 12:44:42.699591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.028 [2024-11-25 12:44:42.701488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.028 [2024-11-25 12:44:42.701609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.028 [2024-11-25 12:44:42.701763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.028 [2024-11-25 12:44:42.701764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 [2024-11-25 12:44:43.424304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 Malloc0 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.599 [2024-11-25 12:44:43.491181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.599 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.600 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:03.600 test case1: single bdev can't be used in multiple subsystems 00:10:03.600 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:03.600 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.600 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.860 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.860 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:03.860 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.860 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.860 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.861 [2024-11-25 12:44:43.527115] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:03.861 [2024-11-25 12:44:43.527135] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:03.861 [2024-11-25 12:44:43.527143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.861 request: 00:10:03.861 { 00:10:03.861 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:03.861 "namespace": { 00:10:03.861 "bdev_name": "Malloc0", 00:10:03.861 "no_auto_visible": false 
00:10:03.861 }, 00:10:03.861 "method": "nvmf_subsystem_add_ns", 00:10:03.861 "req_id": 1 00:10:03.861 } 00:10:03.861 Got JSON-RPC error response 00:10:03.861 response: 00:10:03.861 { 00:10:03.861 "code": -32602, 00:10:03.861 "message": "Invalid parameters" 00:10:03.861 } 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:03.861 Adding namespace failed - expected result. 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:03.861 test case2: host connect to nvmf target in multiple paths 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.861 [2024-11-25 12:44:43.539273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.861 12:44:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.244 12:44:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:07.155 12:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:07.155 12:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:07.155 12:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.155 12:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:07.155 12:44:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:09.065 12:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:09.065 12:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:09.065 12:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.065 12:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:09.065 12:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.065 12:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:09.065 12:44:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:09.065 [global] 00:10:09.065 thread=1 00:10:09.065 invalidate=1 00:10:09.065 rw=write 00:10:09.065 time_based=1 00:10:09.065 runtime=1 00:10:09.065 ioengine=libaio 00:10:09.065 direct=1 00:10:09.065 bs=4096 00:10:09.065 iodepth=1 00:10:09.065 norandommap=0 00:10:09.065 numjobs=1 00:10:09.065 00:10:09.065 verify_dump=1 00:10:09.065 verify_backlog=512 00:10:09.065 verify_state_save=0 00:10:09.065 do_verify=1 00:10:09.065 verify=crc32c-intel 00:10:09.065 [job0] 00:10:09.065 filename=/dev/nvme0n1 00:10:09.065 Could not set queue depth (nvme0n1) 00:10:09.326 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.326 fio-3.35 00:10:09.326 Starting 1 thread 00:10:10.711 00:10:10.711 job0: (groupid=0, jobs=1): err= 0: pid=476323: Mon Nov 25 12:44:50 2024 00:10:10.711 read: IOPS=18, BW=74.1KiB/s (75.9kB/s)(76.0KiB/1026msec) 00:10:10.711 slat (nsec): min=25957, max=27296, avg=26343.74, stdev=301.06 00:10:10.711 clat (usec): min=40905, max=41888, avg=41073.93, stdev=285.59 00:10:10.711 lat (usec): min=40931, max=41914, avg=41100.27, stdev=285.57 00:10:10.711 clat percentiles (usec): 00:10:10.711 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:10.711 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:10.711 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:10:10.711 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:10.711 | 99.99th=[41681] 00:10:10.711 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:10.711 slat (usec): min=9, max=27803, avg=79.77, stdev=1227.70 00:10:10.711 clat (usec): min=183, max=706, avg=392.87, stdev=101.27 00:10:10.711 lat (usec): min=193, max=28241, avg=472.64, stdev=1234.26 00:10:10.711 clat percentiles (usec): 00:10:10.711 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 247], 20.00th=[ 318], 00:10:10.711 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 392], 60.00th=[ 420], 00:10:10.711 | 70.00th=[ 437], 80.00th=[ 469], 90.00th=[ 545], 95.00th=[ 570], 00:10:10.711 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 709], 00:10:10.711 | 99.99th=[ 709] 00:10:10.711 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.711 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.711 lat (usec) : 250=10.36%, 500=69.30%, 750=16.76% 00:10:10.711 lat (msec) : 50=3.58% 00:10:10.711 cpu : usr=0.59%, sys=1.27%, ctx=535, majf=0, minf=1 00:10:10.711 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.711 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.711 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.711 00:10:10.711 Run status group 0 (all jobs): 00:10:10.711 READ: bw=74.1KiB/s (75.9kB/s), 74.1KiB/s-74.1KiB/s (75.9kB/s-75.9kB/s), io=76.0KiB (77.8kB), run=1026-1026msec 00:10:10.711 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:10:10.711 00:10:10.711 Disk stats (read/write): 00:10:10.711 nvme0n1: ios=40/512, merge=0/0, ticks=1584/204, in_queue=1788, util=98.80% 00:10:10.711 12:44:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.711 rmmod nvme_tcp 00:10:10.711 rmmod nvme_fabrics 00:10:10.711 rmmod nvme_keyring 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:10.711 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 474946 ']' 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 474946 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 474946 ']' 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 474946 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 474946 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 474946' 00:10:10.712 killing process with pid 474946 00:10:10.712 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 474946 00:10:10.712 12:44:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 474946 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.973 12:44:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.525 00:10:13.525 real 0m18.829s 00:10:13.525 user 0m49.565s 00:10:13.525 sys 0m7.110s 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.525 ************************************ 00:10:13.525 END TEST nvmf_nmic 00:10:13.525 ************************************ 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.525 ************************************ 00:10:13.525 START TEST nvmf_fio_target 00:10:13.525 ************************************ 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:13.525 * Looking for test storage... 
00:10:13.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.525 12:44:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:13.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.525 --rc genhtml_branch_coverage=1 00:10:13.525 --rc genhtml_function_coverage=1 00:10:13.525 --rc genhtml_legend=1 00:10:13.525 --rc geninfo_all_blocks=1 00:10:13.525 --rc geninfo_unexecuted_blocks=1 00:10:13.525 00:10:13.525 ' 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:13.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.525 --rc genhtml_branch_coverage=1 00:10:13.525 --rc genhtml_function_coverage=1 00:10:13.525 --rc genhtml_legend=1 00:10:13.525 --rc geninfo_all_blocks=1 00:10:13.525 --rc geninfo_unexecuted_blocks=1 00:10:13.525 00:10:13.525 ' 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:13.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.525 --rc genhtml_branch_coverage=1 00:10:13.525 --rc genhtml_function_coverage=1 00:10:13.525 --rc genhtml_legend=1 00:10:13.525 --rc geninfo_all_blocks=1 00:10:13.525 --rc geninfo_unexecuted_blocks=1 00:10:13.525 00:10:13.525 ' 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:13.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.525 --rc genhtml_branch_coverage=1 00:10:13.525 --rc genhtml_function_coverage=1 00:10:13.525 --rc genhtml_legend=1 00:10:13.525 --rc geninfo_all_blocks=1 00:10:13.525 --rc geninfo_unexecuted_blocks=1 00:10:13.525 00:10:13.525 ' 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.525 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.526 12:44:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.526 12:44:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.667 12:45:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:21.667 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:21.667 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.667 12:45:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:21.667 Found net devices under 0000:31:00.0: cvl_0_0 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:21.667 Found net devices under 0000:31:00.1: cvl_0_1 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.667 12:45:01 
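The two matches above are the two ports of one Intel E810 NIC (vendor 0x8086, device 0x159b); gather_supported_nvmf_pci_devs resolves each matched PCI function to its kernel net device purely through sysfs, which is why the cvl_0_0/cvl_0_1 names appear before any IP configuration happens. A minimal sketch of that lookup, using the PCI addresses found on this host:

    # Sketch of the sysfs walk traced above; 0000:31:00.0/.1 are the two
    # E810 functions this run detected.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$net" ] || continue
            echo "Found net devices under $pci: ${net##*/}"   # e.g. cvl_0_0
        done
    done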
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.667 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:10:21.668 00:10:21.668 --- 10.0.0.2 ping statistics --- 00:10:21.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.668 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:10:21.668 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:10:21.930 00:10:21.930 --- 10.0.0.1 ping statistics --- 00:10:21.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.930 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=481615 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 481615 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 481615 ']' 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.930 12:45:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.930 [2024-11-25 12:45:01.692547] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
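Before the target starts, nvmf_tcp_init above builds the test topology by pushing one E810 port into a private network namespace, so the target (10.0.0.2 on cvl_0_0 inside the namespace) and the initiator (10.0.0.1 on cvl_0_1 in the host namespace) talk over the physical link. Condensed from the trace, with the cvl_* names this host's driver assigned:

    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"       # target port moves into the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # open the NVMe/TCP port toward the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # connectivity checks, exactly as in the log
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

This is also why nvmf_tgt itself is launched under ip netns exec cvl_0_0_ns_spdk a few lines below: the listener has to bind 10.0.0.2 inside that namespace.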
00:10:21.930 [2024-11-25 12:45:01.692614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.930 [2024-11-25 12:45:01.788035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.930 [2024-11-25 12:45:01.829451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.930 [2024-11-25 12:45:01.829489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.930 [2024-11-25 12:45:01.829498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.930 [2024-11-25 12:45:01.829505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.930 [2024-11-25 12:45:01.829511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.930 [2024-11-25 12:45:01.831390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.930 [2024-11-25 12:45:01.831509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.930 [2024-11-25 12:45:01.831651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.930 [2024-11-25 12:45:01.831652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.883 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.884 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:22.884 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.884 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.884 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.884 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.884 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.884 [2024-11-25 12:45:02.683064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.884 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.149 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:23.149 12:45:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.410 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:23.410 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.670 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:23.670 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.670 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:23.670 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:23.930 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.191 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:24.191 12:45:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.452 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:24.452 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.452 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:24.452 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:24.714 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:24.975 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:24.975 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.236 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.236 12:45:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.236 12:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.495 [2024-11-25 12:45:05.242231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.495 12:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:25.755 12:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:25.755 12:45:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:27.665 12:45:07 
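Everything fio exercises below is provisioned by the rpc.py sequence just traced: a TCP transport, seven 64 MiB malloc bdevs (two exported directly, two assembled into raid0, three into concat0), one subsystem with four namespaces and a 10.0.0.2:4420 listener, then a kernel-initiator connect. A condensed sketch of the same sequence; $rpc abbreviates the full scripts/rpc.py path from the log, and the loops compress calls the test issues one at a time:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192

    # seven 64 MiB bdevs with 512-byte blocks: Malloc0..Malloc6
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done

    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
                 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is what waitforserial confirms next by counting SPDKISFASTANDAWESOME serials in lsblk -l -o NAME,SERIAL before handing the devices to fio.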
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:27.665 12:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:27.665 12:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.665 12:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:27.665 12:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:27.665 12:45:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:29.598 12:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:29.598 12:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:29.598 12:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.598 12:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:29.598 12:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.598 12:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:29.598 12:45:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:29.598 [global] 00:10:29.598 thread=1 00:10:29.598 invalidate=1 00:10:29.598 rw=write 00:10:29.598 time_based=1 00:10:29.598 runtime=1 00:10:29.598 ioengine=libaio 00:10:29.598 direct=1 00:10:29.598 bs=4096 00:10:29.598 iodepth=1 00:10:29.598 norandommap=0 00:10:29.598 numjobs=1 00:10:29.598 00:10:29.598 verify_dump=1 00:10:29.598 verify_backlog=512 00:10:29.598 verify_state_save=0 00:10:29.598 do_verify=1 00:10:29.598 verify=crc32c-intel 00:10:29.598 [job0] 00:10:29.598 filename=/dev/nvme0n1 00:10:29.598 [job1] 00:10:29.598 filename=/dev/nvme0n2 00:10:29.598 [job2] 00:10:29.598 filename=/dev/nvme0n3 00:10:29.598 [job3] 00:10:29.598 filename=/dev/nvme0n4 00:10:29.598 Could not set queue depth (nvme0n1) 00:10:29.598 Could not set queue depth (nvme0n2) 00:10:29.598 Could not set queue depth (nvme0n3) 00:10:29.598 Could not set queue depth (nvme0n4) 00:10:29.859 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.859 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.859 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.859 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.859 fio-3.35 00:10:29.859 Starting 4 threads 00:10:31.260 00:10:31.260 job0: (groupid=0, jobs=1): err= 0: pid=483661: Mon Nov 25 12:45:10 2024 00:10:31.260 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:31.260 slat (nsec): min=6597, max=64867, avg=26221.76, stdev=4376.31 00:10:31.260 clat (usec): min=679, max=3944, avg=990.03, stdev=205.35 00:10:31.260 lat (usec): min=687, max=3972, avg=1016.25, stdev=205.94 00:10:31.260 clat percentiles (usec): 00:10:31.260 | 1.00th=[ 734], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 906], 
00:10:31.260 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:10:31.260 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:10:31.260 | 99.00th=[ 1172], 99.50th=[ 1270], 99.90th=[ 3949], 99.95th=[ 3949], 00:10:31.260 | 99.99th=[ 3949] 00:10:31.260 write: IOPS=716, BW=2865KiB/s (2934kB/s)(2868KiB/1001msec); 0 zone resets 00:10:31.260 slat (nsec): min=9087, max=71559, avg=30334.52, stdev=9676.55 00:10:31.260 clat (usec): min=263, max=1235, avg=625.61, stdev=122.68 00:10:31.260 lat (usec): min=297, max=1250, avg=655.95, stdev=125.80 00:10:31.260 clat percentiles (usec): 00:10:31.260 | 1.00th=[ 347], 5.00th=[ 416], 10.00th=[ 469], 20.00th=[ 529], 00:10:31.260 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:10:31.260 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799], 00:10:31.260 | 99.00th=[ 938], 99.50th=[ 996], 99.90th=[ 1237], 99.95th=[ 1237], 00:10:31.260 | 99.99th=[ 1237] 00:10:31.260 bw ( KiB/s): min= 4096, max= 4096, per=46.68%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.260 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.260 lat (usec) : 500=8.71%, 750=41.90%, 1000=28.64% 00:10:31.261 lat (msec) : 2=20.59%, 4=0.16% 00:10:31.261 cpu : usr=3.10%, sys=4.20%, ctx=1229, majf=0, minf=1 00:10:31.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.261 issued rwts: total=512,717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.261 job1: (groupid=0, jobs=1): err= 0: pid=483662: Mon Nov 25 12:45:10 2024 00:10:31.261 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:10:31.261 slat (nsec): min=11558, max=32726, avg=25721.06, stdev=4310.07 00:10:31.261 clat (usec): min=41458, max=42029, avg=41937.07, stdev=132.70 00:10:31.261 lat (usec): min=41485, max=42051, avg=41962.79, stdev=132.47 00:10:31.261 clat percentiles (usec): 00:10:31.261 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:31.261 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:31.261 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:31.261 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:31.261 | 99.99th=[42206] 00:10:31.261 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:31.261 slat (nsec): min=9974, max=73363, avg=30364.79, stdev=10384.95 00:10:31.261 clat (usec): min=240, max=1150, avg=606.50, stdev=115.42 00:10:31.261 lat (usec): min=254, max=1189, avg=636.86, stdev=119.06 00:10:31.261 clat percentiles (usec): 00:10:31.261 | 1.00th=[ 347], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 506], 00:10:31.261 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:10:31.261 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:10:31.261 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:31.261 | 99.99th=[ 1156] 00:10:31.261 bw ( KiB/s): min= 4096, max= 4096, per=46.68%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.261 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.261 lat (usec) : 250=0.38%, 500=17.42%, 750=70.45%, 1000=8.33% 00:10:31.261 lat (msec) : 2=0.38%, 50=3.03% 00:10:31.261 cpu : usr=0.50%, sys=1.70%, ctx=529, majf=0, minf=1 00:10:31.261 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.261 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.261 job2: (groupid=0, jobs=1): err= 0: pid=483669: Mon Nov 25 12:45:10 2024 00:10:31.261 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1017msec) 00:10:31.261 slat (nsec): min=15178, max=29205, avg=27583.88, stdev=3228.73 00:10:31.261 clat (usec): min=40866, max=42056, avg=41576.52, stdev=478.84 00:10:31.261 lat (usec): min=40895, max=42084, avg=41604.11, stdev=478.62 00:10:31.261 clat percentiles (usec): 00:10:31.261 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:31.261 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:31.261 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:31.261 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:31.261 | 99.99th=[42206] 00:10:31.261 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:10:31.261 slat (nsec): min=5954, max=55691, avg=29634.07, stdev=11812.04 00:10:31.261 clat (usec): min=132, max=1252, avg=568.14, stdev=178.68 00:10:31.261 lat (usec): min=145, max=1264, avg=597.78, stdev=185.49 00:10:31.261 clat percentiles (usec): 00:10:31.261 | 1.00th=[ 215], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 383], 00:10:31.261 | 30.00th=[ 474], 40.00th=[ 545], 50.00th=[ 603], 60.00th=[ 652], 00:10:31.261 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:10:31.261 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:31.261 | 99.99th=[ 1254] 00:10:31.261 bw ( KiB/s): min= 4096, max= 4096, per=46.68%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.261 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.261 lat (usec) : 250=1.70%, 500=30.43%, 750=51.04%, 1000=13.42% 00:10:31.261 lat (msec) : 2=0.19%, 50=3.21% 00:10:31.261 cpu : usr=1.08%, sys=1.77%, ctx=531, majf=0, minf=1 00:10:31.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.261 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.261 job3: (groupid=0, jobs=1): err= 0: pid=483674: Mon Nov 25 12:45:10 2024 00:10:31.261 read: IOPS=19, BW=77.9KiB/s (79.8kB/s)(80.0KiB/1027msec) 00:10:31.261 slat (nsec): min=26869, max=29078, avg=28355.75, stdev=465.42 00:10:31.261 clat (usec): min=574, max=42501, avg=33728.35, stdev=16934.75 00:10:31.261 lat (usec): min=603, max=42528, avg=33756.70, stdev=16934.60 00:10:31.261 clat percentiles (usec): 00:10:31.261 | 1.00th=[ 578], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 865], 00:10:31.261 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:31.261 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:31.261 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:31.261 | 99.99th=[42730] 00:10:31.261 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:31.261 slat (usec): min=9, max=42579, avg=151.45, stdev=2045.04 
00:10:31.261 clat (usec): min=169, max=1139, avg=526.76, stdev=141.86 00:10:31.261 lat (usec): min=179, max=43369, avg=678.20, stdev=2067.39 00:10:31.261 clat percentiles (usec): 00:10:31.261 | 1.00th=[ 247], 5.00th=[ 293], 10.00th=[ 355], 20.00th=[ 392], 00:10:31.261 | 30.00th=[ 441], 40.00th=[ 490], 50.00th=[ 537], 60.00th=[ 570], 00:10:31.261 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 734], 00:10:31.261 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 1139], 99.95th=[ 1139], 00:10:31.261 | 99.99th=[ 1139] 00:10:31.261 bw ( KiB/s): min= 4096, max= 4096, per=46.68%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.261 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.261 lat (usec) : 250=1.13%, 500=39.29%, 750=51.88%, 1000=4.51% 00:10:31.261 lat (msec) : 2=0.19%, 50=3.01% 00:10:31.261 cpu : usr=0.88%, sys=2.05%, ctx=537, majf=0, minf=1 00:10:31.261 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.261 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.261 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.261 00:10:31.261 Run status group 0 (all jobs): 00:10:31.261 READ: bw=2201KiB/s (2253kB/s), 63.9KiB/s-2046KiB/s (65.4kB/s-2095kB/s), io=2260KiB (2314kB), run=1001-1027msec 00:10:31.261 WRITE: bw=8775KiB/s (8986kB/s), 1994KiB/s-2865KiB/s (2042kB/s-2934kB/s), io=9012KiB (9228kB), run=1001-1027msec 00:10:31.261 00:10:31.261 Disk stats (read/write): 00:10:31.261 nvme0n1: ios=522/512, merge=0/0, ticks=571/257, in_queue=828, util=90.57% 00:10:31.261 nvme0n2: ios=61/512, merge=0/0, ticks=1193/299, in_queue=1492, util=96.63% 00:10:31.261 nvme0n3: ios=69/512, merge=0/0, ticks=962/236, in_queue=1198, util=97.03% 00:10:31.261 nvme0n4: ios=72/512, merge=0/0, ticks=1392/214, in_queue=1606, util=96.57% 00:10:31.261 12:45:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:31.261 [global] 00:10:31.261 thread=1 00:10:31.261 invalidate=1 00:10:31.261 rw=randwrite 00:10:31.261 time_based=1 00:10:31.261 runtime=1 00:10:31.261 ioengine=libaio 00:10:31.261 direct=1 00:10:31.261 bs=4096 00:10:31.261 iodepth=1 00:10:31.261 norandommap=0 00:10:31.261 numjobs=1 00:10:31.261 00:10:31.261 verify_dump=1 00:10:31.261 verify_backlog=512 00:10:31.261 verify_state_save=0 00:10:31.261 do_verify=1 00:10:31.261 verify=crc32c-intel 00:10:31.261 [job0] 00:10:31.261 filename=/dev/nvme0n1 00:10:31.261 [job1] 00:10:31.261 filename=/dev/nvme0n2 00:10:31.261 [job2] 00:10:31.261 filename=/dev/nvme0n3 00:10:31.261 [job3] 00:10:31.261 filename=/dev/nvme0n4 00:10:31.261 Could not set queue depth (nvme0n1) 00:10:31.261 Could not set queue depth (nvme0n2) 00:10:31.261 Could not set queue depth (nvme0n3) 00:10:31.261 Could not set queue depth (nvme0n4) 00:10:31.525 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.525 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.525 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.525 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:10:31.525 fio-3.35 00:10:31.525 Starting 4 threads 00:10:32.925 00:10:32.925 job0: (groupid=0, jobs=1): err= 0: pid=484340: Mon Nov 25 12:45:12 2024 00:10:32.925 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:32.925 slat (nsec): min=24712, max=61293, avg=25632.95, stdev=2011.60 00:10:32.925 clat (usec): min=646, max=1228, avg=980.22, stdev=77.48 00:10:32.925 lat (usec): min=671, max=1253, avg=1005.85, stdev=77.39 00:10:32.925 clat percentiles (usec): 00:10:32.925 | 1.00th=[ 791], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 922], 00:10:32.925 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004], 00:10:32.925 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1090], 00:10:32.925 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:32.925 | 99.99th=[ 1221] 00:10:32.925 write: IOPS=769, BW=3077KiB/s (3151kB/s)(3080KiB/1001msec); 0 zone resets 00:10:32.925 slat (nsec): min=9367, max=67625, avg=28862.03, stdev=8612.95 00:10:32.925 clat (usec): min=200, max=986, avg=588.45, stdev=109.71 00:10:32.925 lat (usec): min=210, max=1018, avg=617.31, stdev=112.81 00:10:32.925 clat percentiles (usec): 00:10:32.925 | 1.00th=[ 330], 5.00th=[ 412], 10.00th=[ 445], 20.00th=[ 506], 00:10:32.925 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:10:32.925 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 766], 00:10:32.925 | 99.00th=[ 857], 99.50th=[ 873], 99.90th=[ 988], 99.95th=[ 988], 00:10:32.925 | 99.99th=[ 988] 00:10:32.925 bw ( KiB/s): min= 4096, max= 4096, per=45.47%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.925 lat (usec) : 250=0.08%, 500=11.62%, 750=44.54%, 1000=26.60% 00:10:32.925 lat (msec) : 2=17.16% 00:10:32.925 cpu : usr=2.30%, sys=3.30%, ctx=1283, majf=0, minf=1 00:10:32.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.925 issued rwts: total=512,770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.925 job1: (groupid=0, jobs=1): err= 0: pid=484341: Mon Nov 25 12:45:12 2024 00:10:32.925 read: IOPS=15, BW=62.5KiB/s (64.0kB/s)(64.0KiB/1024msec) 00:10:32.925 slat (nsec): min=26097, max=26848, avg=26388.75, stdev=219.64 00:10:32.925 clat (usec): min=40771, max=41986, avg=41367.14, stdev=453.74 00:10:32.925 lat (usec): min=40798, max=42012, avg=41393.53, stdev=453.69 00:10:32.925 clat percentiles (usec): 00:10:32.925 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:32.925 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:32.925 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:32.925 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:32.925 | 99.99th=[42206] 00:10:32.925 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:32.925 slat (nsec): min=8988, max=51651, avg=30400.50, stdev=7984.49 00:10:32.925 clat (usec): min=255, max=993, avg=668.08, stdev=129.94 00:10:32.925 lat (usec): min=266, max=1025, avg=698.49, stdev=132.39 00:10:32.925 clat percentiles (usec): 00:10:32.925 | 1.00th=[ 371], 5.00th=[ 449], 10.00th=[ 482], 20.00th=[ 553], 00:10:32.925 | 30.00th=[ 603], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 709], 00:10:32.925 | 
70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 832], 95.00th=[ 881], 00:10:32.925 | 99.00th=[ 955], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 996], 00:10:32.925 | 99.99th=[ 996] 00:10:32.925 bw ( KiB/s): min= 4096, max= 4096, per=45.47%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.925 lat (usec) : 500=11.55%, 750=59.28%, 1000=26.14% 00:10:32.925 lat (msec) : 50=3.03% 00:10:32.925 cpu : usr=1.56%, sys=1.47%, ctx=528, majf=0, minf=1 00:10:32.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.925 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.925 job2: (groupid=0, jobs=1): err= 0: pid=484342: Mon Nov 25 12:45:12 2024 00:10:32.925 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1019msec) 00:10:32.925 slat (nsec): min=25613, max=26462, avg=25842.00, stdev=247.83 00:10:32.925 clat (usec): min=40914, max=42057, avg=41496.52, stdev=494.87 00:10:32.925 lat (usec): min=40939, max=42083, avg=41522.36, stdev=494.93 00:10:32.925 clat percentiles (usec): 00:10:32.925 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:32.925 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:32.925 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:32.925 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:32.925 | 99.99th=[42206] 00:10:32.925 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:10:32.925 slat (nsec): min=9544, max=63309, avg=30046.28, stdev=7581.48 00:10:32.925 clat (usec): min=225, max=867, avg=573.59, stdev=127.54 00:10:32.925 lat (usec): min=242, max=899, avg=603.63, stdev=129.50 00:10:32.925 clat percentiles (usec): 00:10:32.925 | 1.00th=[ 258], 5.00th=[ 363], 10.00th=[ 392], 20.00th=[ 461], 00:10:32.925 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 611], 00:10:32.925 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 783], 00:10:32.925 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 865], 99.95th=[ 865], 00:10:32.925 | 99.99th=[ 865] 00:10:32.925 bw ( KiB/s): min= 4096, max= 4096, per=45.47%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.925 lat (usec) : 250=0.76%, 500=27.60%, 750=60.49%, 1000=7.94% 00:10:32.925 lat (msec) : 50=3.21% 00:10:32.925 cpu : usr=0.98%, sys=1.28%, ctx=529, majf=0, minf=2 00:10:32.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.925 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.925 job3: (groupid=0, jobs=1): err= 0: pid=484343: Mon Nov 25 12:45:12 2024 00:10:32.925 read: IOPS=17, BW=71.3KiB/s (73.0kB/s)(72.0KiB/1010msec) 00:10:32.925 slat (nsec): min=26260, max=28372, avg=26573.94, stdev=474.22 00:10:32.925 clat (usec): min=1086, max=42057, avg=37304.17, stdev=13167.10 00:10:32.925 lat (usec): min=1112, max=42084, avg=37330.74, stdev=13167.19 00:10:32.925 clat percentiles (usec): 00:10:32.925 | 
1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[41157], 00:10:32.925 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:32.925 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:32.926 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:32.926 | 99.99th=[42206] 00:10:32.926 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:32.926 slat (nsec): min=9946, max=53330, avg=31948.91, stdev=8407.83 00:10:32.926 clat (usec): min=237, max=951, avg=619.73, stdev=123.22 00:10:32.926 lat (usec): min=249, max=985, avg=651.68, stdev=126.22 00:10:32.926 clat percentiles (usec): 00:10:32.926 | 1.00th=[ 289], 5.00th=[ 396], 10.00th=[ 457], 20.00th=[ 515], 00:10:32.926 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:10:32.926 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 799], 00:10:32.926 | 99.00th=[ 857], 99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 955], 00:10:32.926 | 99.99th=[ 955] 00:10:32.926 bw ( KiB/s): min= 4096, max= 4096, per=45.47%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.926 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.926 lat (usec) : 250=0.38%, 500=16.42%, 750=65.66%, 1000=14.15% 00:10:32.926 lat (msec) : 2=0.38%, 50=3.02% 00:10:32.926 cpu : usr=0.79%, sys=1.59%, ctx=532, majf=0, minf=1 00:10:32.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.926 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.926 00:10:32.926 Run status group 0 (all jobs): 00:10:32.926 READ: bw=2199KiB/s (2252kB/s), 62.5KiB/s-2046KiB/s (64.0kB/s-2095kB/s), io=2252KiB (2306kB), run=1001-1024msec 00:10:32.926 WRITE: bw=9008KiB/s (9224kB/s), 2000KiB/s-3077KiB/s (2048kB/s-3151kB/s), io=9224KiB (9445kB), run=1001-1024msec 00:10:32.926 00:10:32.926 Disk stats (read/write): 00:10:32.926 nvme0n1: ios=549/512, merge=0/0, ticks=533/299, in_queue=832, util=86.57% 00:10:32.926 nvme0n2: ios=48/512, merge=0/0, ticks=787/272, in_queue=1059, util=95.92% 00:10:32.926 nvme0n3: ios=12/512, merge=0/0, ticks=501/283, in_queue=784, util=88.49% 00:10:32.926 nvme0n4: ios=36/512, merge=0/0, ticks=1406/297, in_queue=1703, util=96.79% 00:10:32.926 12:45:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:32.926 [global] 00:10:32.926 thread=1 00:10:32.926 invalidate=1 00:10:32.926 rw=write 00:10:32.926 time_based=1 00:10:32.926 runtime=1 00:10:32.926 ioengine=libaio 00:10:32.926 direct=1 00:10:32.926 bs=4096 00:10:32.926 iodepth=128 00:10:32.926 norandommap=0 00:10:32.926 numjobs=1 00:10:32.926 00:10:32.926 verify_dump=1 00:10:32.926 verify_backlog=512 00:10:32.926 verify_state_save=0 00:10:32.926 do_verify=1 00:10:32.926 verify=crc32c-intel 00:10:32.926 [job0] 00:10:32.926 filename=/dev/nvme0n1 00:10:32.926 [job1] 00:10:32.926 filename=/dev/nvme0n2 00:10:32.926 [job2] 00:10:32.926 filename=/dev/nvme0n3 00:10:32.926 [job3] 00:10:32.926 filename=/dev/nvme0n4 00:10:32.926 Could not set queue depth (nvme0n1) 00:10:32.926 Could not set queue depth (nvme0n2) 00:10:32.926 Could not set queue depth (nvme0n3) 00:10:32.926 Could not set queue depth 
(nvme0n4) 00:10:33.187 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.187 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.187 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.187 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.187 fio-3.35 00:10:33.187 Starting 4 threads 00:10:34.590 00:10:34.590 job0: (groupid=0, jobs=1): err= 0: pid=484865: Mon Nov 25 12:45:14 2024 00:10:34.590 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:34.590 slat (nsec): min=927, max=19931k, avg=101597.71, stdev=767217.48 00:10:34.590 clat (usec): min=2941, max=65688, avg=13823.60, stdev=13089.87 00:10:34.590 lat (usec): min=2947, max=65694, avg=13925.20, stdev=13159.10 00:10:34.590 clat percentiles (usec): 00:10:34.590 | 1.00th=[ 3294], 5.00th=[ 5407], 10.00th=[ 6128], 20.00th=[ 7177], 00:10:34.590 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9503], 00:10:34.590 | 70.00th=[11600], 80.00th=[13829], 90.00th=[31589], 95.00th=[48497], 00:10:34.590 | 99.00th=[63701], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:10:34.590 | 99.99th=[65799] 00:10:34.590 write: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1003msec); 0 zone resets 00:10:34.590 slat (nsec): min=1621, max=14184k, avg=105713.89, stdev=765897.58 00:10:34.590 clat (usec): min=679, max=64291, avg=13163.49, stdev=12030.25 00:10:34.590 lat (usec): min=688, max=64316, avg=13269.21, stdev=12110.58 00:10:34.590 clat percentiles (usec): 00:10:34.590 | 1.00th=[ 2573], 5.00th=[ 4359], 10.00th=[ 5604], 20.00th=[ 5997], 00:10:34.590 | 30.00th=[ 6194], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 8455], 00:10:34.590 | 70.00th=[12256], 80.00th=[19530], 90.00th=[32113], 95.00th=[40109], 00:10:34.590 | 99.00th=[53740], 99.50th=[58983], 99.90th=[63701], 99.95th=[63701], 00:10:34.590 | 99.99th=[64226] 00:10:34.590 bw ( KiB/s): min= 9576, max=27384, per=22.85%, avg=18480.00, stdev=12592.16, samples=2 00:10:34.590 iops : min= 2394, max= 6846, avg=4620.00, stdev=3148.04, samples=2 00:10:34.590 lat (usec) : 750=0.03%, 1000=0.09% 00:10:34.590 lat (msec) : 2=0.36%, 4=3.03%, 10=61.67%, 20=18.42%, 50=12.80% 00:10:34.590 lat (msec) : 100=3.61% 00:10:34.590 cpu : usr=2.99%, sys=3.69%, ctx=519, majf=0, minf=1 00:10:34.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:34.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.590 issued rwts: total=4608,4747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.590 job1: (groupid=0, jobs=1): err= 0: pid=484866: Mon Nov 25 12:45:14 2024 00:10:34.590 read: IOPS=6111, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1004msec) 00:10:34.590 slat (nsec): min=953, max=9909.4k, avg=73973.72, stdev=522167.41 00:10:34.590 clat (usec): min=2811, max=26189, avg=9471.17, stdev=3216.73 00:10:34.590 lat (usec): min=3720, max=26191, avg=9545.15, stdev=3250.80 00:10:34.590 clat percentiles (usec): 00:10:34.590 | 1.00th=[ 4228], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7177], 00:10:34.590 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8717], 60.00th=[ 9241], 00:10:34.590 | 70.00th=[10159], 80.00th=[11469], 90.00th=[13698], 95.00th=[16450], 00:10:34.590 | 
99.00th=[21103], 99.50th=[23987], 99.90th=[24511], 99.95th=[26084], 00:10:34.590 | 99.99th=[26084] 00:10:34.590 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:10:34.590 slat (nsec): min=1654, max=19751k, avg=83067.64, stdev=563099.26 00:10:34.590 clat (usec): min=2189, max=53994, avg=10556.68, stdev=7188.85 00:10:34.590 lat (usec): min=2469, max=53997, avg=10639.75, stdev=7246.88 00:10:34.590 clat percentiles (usec): 00:10:34.590 | 1.00th=[ 3589], 5.00th=[ 4424], 10.00th=[ 5342], 20.00th=[ 5997], 00:10:34.590 | 30.00th=[ 6652], 40.00th=[ 7570], 50.00th=[ 8356], 60.00th=[10159], 00:10:34.590 | 70.00th=[12125], 80.00th=[13042], 90.00th=[16057], 95.00th=[24511], 00:10:34.590 | 99.00th=[47973], 99.50th=[49546], 99.90th=[53740], 99.95th=[53740], 00:10:34.590 | 99.99th=[53740] 00:10:34.590 bw ( KiB/s): min=22864, max=26288, per=30.38%, avg=24576.00, stdev=2421.13, samples=2 00:10:34.590 iops : min= 5716, max= 6572, avg=6144.00, stdev=605.28, samples=2 00:10:34.590 lat (msec) : 4=1.51%, 10=62.43%, 20=32.13%, 50=3.70%, 100=0.23% 00:10:34.590 cpu : usr=3.89%, sys=6.98%, ctx=475, majf=0, minf=1 00:10:34.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:34.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.590 issued rwts: total=6136,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.590 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.590 job2: (groupid=0, jobs=1): err= 0: pid=484867: Mon Nov 25 12:45:14 2024 00:10:34.590 read: IOPS=4111, BW=16.1MiB/s (16.8MB/s)(16.8MiB/1045msec) 00:10:34.590 slat (nsec): min=981, max=19660k, avg=103039.60, stdev=838385.56 00:10:34.590 clat (usec): min=3841, max=81030, avg=15150.48, stdev=12488.71 00:10:34.590 lat (msec): min=3, max=100, avg=15.25, stdev=12.58 00:10:34.590 clat percentiles (usec): 00:10:34.590 | 1.00th=[ 5342], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 9110], 00:10:34.590 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:10:34.590 | 70.00th=[12518], 80.00th=[14222], 90.00th=[27657], 95.00th=[46400], 00:10:34.590 | 99.00th=[64226], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:10:34.590 | 99.99th=[81265] 00:10:34.590 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:10:34.590 slat (nsec): min=1701, max=10879k, avg=101758.76, stdev=720831.18 00:10:34.590 clat (usec): min=659, max=60561, avg=14483.09, stdev=8337.42 00:10:34.590 lat (usec): min=1140, max=60568, avg=14584.85, stdev=8399.03 00:10:34.590 clat percentiles (usec): 00:10:34.590 | 1.00th=[ 3851], 5.00th=[ 4948], 10.00th=[ 6652], 20.00th=[ 7832], 00:10:34.590 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[11076], 60.00th=[13435], 00:10:34.590 | 70.00th=[17695], 80.00th=[21890], 90.00th=[26084], 95.00th=[30540], 00:10:34.590 | 99.00th=[40633], 99.50th=[43254], 99.90th=[49546], 99.95th=[57934], 00:10:34.591 | 99.99th=[60556] 00:10:34.591 bw ( KiB/s): min=16384, max=20480, per=22.79%, avg=18432.00, stdev=2896.31, samples=2 00:10:34.591 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:34.591 lat (usec) : 750=0.01% 00:10:34.591 lat (msec) : 2=0.10%, 4=0.58%, 10=29.66%, 20=50.54%, 50=17.60% 00:10:34.591 lat (msec) : 100=1.50% 00:10:34.591 cpu : usr=3.45%, sys=4.79%, ctx=351, majf=0, minf=1 00:10:34.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:34.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.591 issued rwts: total=4296,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.591 job3: (groupid=0, jobs=1): err= 0: pid=484870: Mon Nov 25 12:45:14 2024 00:10:34.591 read: IOPS=5127, BW=20.0MiB/s (21.0MB/s)(20.2MiB/1009msec) 00:10:34.591 slat (nsec): min=1017, max=13591k, avg=81369.32, stdev=606540.49 00:10:34.591 clat (usec): min=3514, max=37864, avg=10214.75, stdev=4252.17 00:10:34.591 lat (usec): min=3523, max=37892, avg=10296.12, stdev=4306.66 00:10:34.591 clat percentiles (usec): 00:10:34.591 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 7439], 00:10:34.591 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9372], 00:10:34.591 | 70.00th=[10290], 80.00th=[11994], 90.00th=[15533], 95.00th=[21103], 00:10:34.591 | 99.00th=[25560], 99.50th=[27132], 99.90th=[28705], 99.95th=[28705], 00:10:34.591 | 99.99th=[38011] 00:10:34.591 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:10:34.591 slat (nsec): min=1737, max=7717.7k, avg=96243.94, stdev=559768.33 00:10:34.591 clat (usec): min=1252, max=72738, avg=13325.17, stdev=14572.41 00:10:34.591 lat (usec): min=1262, max=72748, avg=13421.41, stdev=14672.83 00:10:34.591 clat percentiles (usec): 00:10:34.591 | 1.00th=[ 3458], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5276], 00:10:34.591 | 30.00th=[ 6259], 40.00th=[ 7177], 50.00th=[ 7767], 60.00th=[ 8848], 00:10:34.591 | 70.00th=[10028], 80.00th=[12911], 90.00th=[33817], 95.00th=[55837], 00:10:34.591 | 99.00th=[64226], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:10:34.591 | 99.99th=[72877] 00:10:34.591 bw ( KiB/s): min=11704, max=32760, per=27.49%, avg=22232.00, stdev=14888.84, samples=2 00:10:34.591 iops : min= 2926, max= 8190, avg=5558.00, stdev=3722.21, samples=2 00:10:34.591 lat (msec) : 2=0.05%, 4=1.67%, 10=67.42%, 20=19.91%, 50=7.47% 00:10:34.591 lat (msec) : 100=3.49% 00:10:34.591 cpu : usr=4.27%, sys=6.55%, ctx=390, majf=0, minf=1 00:10:34.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:34.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.591 issued rwts: total=5174,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.591 00:10:34.591 Run status group 0 (all jobs): 00:10:34.591 READ: bw=75.6MiB/s (79.2MB/s), 16.1MiB/s-23.9MiB/s (16.8MB/s-25.0MB/s), io=79.0MiB (82.8MB), run=1003-1045msec 00:10:34.591 WRITE: bw=79.0MiB/s (82.8MB/s), 17.2MiB/s-23.9MiB/s (18.1MB/s-25.1MB/s), io=82.5MiB (86.6MB), run=1003-1045msec 00:10:34.591 00:10:34.591 Disk stats (read/write): 00:10:34.591 nvme0n1: ios=3174/3584, merge=0/0, ticks=14246/15781, in_queue=30027, util=83.67% 00:10:34.591 nvme0n2: ios=5168/5350, merge=0/0, ticks=46003/49033, in_queue=95036, util=90.59% 00:10:34.591 nvme0n3: ios=3638/3902, merge=0/0, ticks=27805/31424, in_queue=59229, util=95.03% 00:10:34.591 nvme0n4: ios=4716/5120, merge=0/0, ticks=46167/55531, in_queue=101698, util=94.21% 00:10:34.591 12:45:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:34.591 [global] 00:10:34.591 thread=1 00:10:34.591 invalidate=1 
00:10:34.591 rw=randwrite 00:10:34.591 time_based=1 00:10:34.591 runtime=1 00:10:34.591 ioengine=libaio 00:10:34.591 direct=1 00:10:34.591 bs=4096 00:10:34.591 iodepth=128 00:10:34.591 norandommap=0 00:10:34.591 numjobs=1 00:10:34.591 00:10:34.591 verify_dump=1 00:10:34.591 verify_backlog=512 00:10:34.591 verify_state_save=0 00:10:34.591 do_verify=1 00:10:34.591 verify=crc32c-intel 00:10:34.591 [job0] 00:10:34.591 filename=/dev/nvme0n1 00:10:34.591 [job1] 00:10:34.591 filename=/dev/nvme0n2 00:10:34.591 [job2] 00:10:34.591 filename=/dev/nvme0n3 00:10:34.591 [job3] 00:10:34.591 filename=/dev/nvme0n4 00:10:34.591 Could not set queue depth (nvme0n1) 00:10:34.591 Could not set queue depth (nvme0n2) 00:10:34.591 Could not set queue depth (nvme0n3) 00:10:34.591 Could not set queue depth (nvme0n4) 00:10:34.929 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.929 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.929 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.929 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.929 fio-3.35 00:10:34.929 Starting 4 threads 00:10:36.350 00:10:36.350 job0: (groupid=0, jobs=1): err= 0: pid=485389: Mon Nov 25 12:45:15 2024 00:10:36.350 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:10:36.350 slat (nsec): min=900, max=13494k, avg=74573.50, stdev=602592.86 00:10:36.350 clat (usec): min=3794, max=31772, avg=10543.98, stdev=4306.06 00:10:36.350 lat (usec): min=3800, max=31824, avg=10618.55, stdev=4349.71 00:10:36.350 clat percentiles (usec): 00:10:36.350 | 1.00th=[ 4113], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7242], 00:10:36.350 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[10552], 00:10:36.350 | 70.00th=[12387], 80.00th=[13960], 90.00th=[17433], 95.00th=[19006], 00:10:36.350 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23725], 99.95th=[24249], 00:10:36.350 | 99.99th=[31851] 00:10:36.350 write: IOPS=6508, BW=25.4MiB/s (26.7MB/s)(25.5MiB/1004msec); 0 zone resets 00:10:36.350 slat (nsec): min=1502, max=10913k, avg=63942.02, stdev=454071.38 00:10:36.350 clat (usec): min=1154, max=58645, avg=9598.46, stdev=6586.30 00:10:36.350 lat (usec): min=1161, max=58651, avg=9662.40, stdev=6620.58 00:10:36.350 clat percentiles (usec): 00:10:36.350 | 1.00th=[ 2040], 5.00th=[ 3949], 10.00th=[ 5211], 20.00th=[ 6652], 00:10:36.350 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 8586], 00:10:36.350 | 70.00th=[ 9896], 80.00th=[11994], 90.00th=[14091], 95.00th=[17695], 00:10:36.350 | 99.00th=[51643], 99.50th=[53216], 99.90th=[57934], 99.95th=[58459], 00:10:36.350 | 99.99th=[58459] 00:10:36.350 bw ( KiB/s): min=20528, max=30736, per=27.36%, avg=25632.00, stdev=7218.15, samples=2 00:10:36.350 iops : min= 5132, max= 7684, avg=6408.00, stdev=1804.54, samples=2 00:10:36.350 lat (msec) : 2=0.48%, 4=2.51%, 10=61.27%, 20=32.13%, 50=3.03% 00:10:36.350 lat (msec) : 100=0.58% 00:10:36.350 cpu : usr=5.08%, sys=5.78%, ctx=551, majf=0, minf=1 00:10:36.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:36.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.350 issued rwts: total=6144,6535,0,0 short=0,0,0,0 dropped=0,0,0,0 
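Note on the job file dumped above: fio-wrapper translates its flags directly into fio options (-i 4096 becomes bs, -d 128 becomes iodepth, -t randwrite becomes rw, -r 1 becomes runtime, and -v enables the crc32c-intel verify block). A minimal standalone reproduction of that workload, with the device path and verify settings copied from the log and everything else an assumption, could look like:

# sketch only: re-create the verify workload outside fio-wrapper (single job shown)
cat > /tmp/nvmf-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
do_verify=1
verify=crc32c-intel
verify_backlog=512
[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-verify.fio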
00:10:36.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.350 job1: (groupid=0, jobs=1): err= 0: pid=485390: Mon Nov 25 12:45:15 2024 00:10:36.350 read: IOPS=5199, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1005msec) 00:10:36.350 slat (nsec): min=940, max=10211k, avg=78539.49, stdev=622523.19 00:10:36.350 clat (usec): min=1747, max=45657, avg=11033.71, stdev=6574.13 00:10:36.350 lat (usec): min=1758, max=45664, avg=11112.25, stdev=6612.49 00:10:36.350 clat percentiles (usec): 00:10:36.350 | 1.00th=[ 2704], 5.00th=[ 4883], 10.00th=[ 5800], 20.00th=[ 6456], 00:10:36.350 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 8291], 60.00th=[10028], 00:10:36.350 | 70.00th=[11469], 80.00th=[16450], 90.00th=[20317], 95.00th=[22676], 00:10:36.350 | 99.00th=[36439], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:10:36.350 | 99.99th=[45876] 00:10:36.350 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:36.350 slat (nsec): min=1576, max=23454k, avg=95823.14, stdev=666486.73 00:10:36.350 clat (usec): min=821, max=66178, avg=12361.56, stdev=11163.95 00:10:36.350 lat (usec): min=845, max=66186, avg=12457.39, stdev=11247.18 00:10:36.350 clat percentiles (usec): 00:10:36.350 | 1.00th=[ 1582], 5.00th=[ 3359], 10.00th=[ 4047], 20.00th=[ 5014], 00:10:36.350 | 30.00th=[ 5473], 40.00th=[ 6915], 50.00th=[ 8586], 60.00th=[10814], 00:10:36.350 | 70.00th=[13435], 80.00th=[16319], 90.00th=[23200], 95.00th=[38011], 00:10:36.350 | 99.00th=[55837], 99.50th=[62653], 99.90th=[66323], 99.95th=[66323], 00:10:36.350 | 99.99th=[66323] 00:10:36.350 bw ( KiB/s): min=16208, max=28672, per=23.96%, avg=22440.00, stdev=8813.38, samples=2 00:10:36.350 iops : min= 4052, max= 7168, avg=5610.00, stdev=2203.34, samples=2 00:10:36.350 lat (usec) : 1000=0.09% 00:10:36.350 lat (msec) : 2=0.72%, 4=5.85%, 10=51.71%, 20=29.20%, 50=11.38% 00:10:36.350 lat (msec) : 100=1.06% 00:10:36.350 cpu : usr=3.19%, sys=6.18%, ctx=461, majf=0, minf=1 00:10:36.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:36.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.350 issued rwts: total=5225,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.350 job2: (groupid=0, jobs=1): err= 0: pid=485391: Mon Nov 25 12:45:15 2024 00:10:36.350 read: IOPS=5215, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1006msec) 00:10:36.350 slat (nsec): min=956, max=21857k, avg=94082.07, stdev=823804.35 00:10:36.350 clat (usec): min=1298, max=59774, avg=12949.83, stdev=8792.64 00:10:36.350 lat (usec): min=1305, max=59788, avg=13043.92, stdev=8846.90 00:10:36.350 clat percentiles (usec): 00:10:36.350 | 1.00th=[ 2507], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 7767], 00:10:36.350 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10814], 00:10:36.350 | 70.00th=[11863], 80.00th=[15139], 90.00th=[26084], 95.00th=[35390], 00:10:36.350 | 99.00th=[43779], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:10:36.350 | 99.99th=[60031] 00:10:36.350 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:36.350 slat (nsec): min=1567, max=43160k, avg=81115.91, stdev=809426.13 00:10:36.350 clat (usec): min=791, max=78034, avg=10560.96, stdev=8961.25 00:10:36.350 lat (usec): min=806, max=82374, avg=10642.07, stdev=9030.65 00:10:36.350 clat percentiles (usec): 00:10:36.350 | 1.00th=[ 3720], 5.00th=[ 4883], 
10.00th=[ 5407], 20.00th=[ 7111], 00:10:36.350 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[10028], 00:10:36.350 | 70.00th=[10552], 80.00th=[11338], 90.00th=[14353], 95.00th=[18744], 00:10:36.350 | 99.00th=[61080], 99.50th=[66323], 99.90th=[78119], 99.95th=[78119], 00:10:36.350 | 99.99th=[78119] 00:10:36.350 bw ( KiB/s): min=19448, max=25600, per=24.05%, avg=22524.00, stdev=4350.12, samples=2 00:10:36.350 iops : min= 4862, max= 6400, avg=5631.00, stdev=1087.53, samples=2 00:10:36.350 lat (usec) : 1000=0.04% 00:10:36.350 lat (msec) : 2=0.23%, 4=1.61%, 10=51.69%, 20=38.34%, 50=6.74% 00:10:36.350 lat (msec) : 100=1.36% 00:10:36.350 cpu : usr=3.38%, sys=5.87%, ctx=459, majf=0, minf=1 00:10:36.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:36.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.350 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.350 job3: (groupid=0, jobs=1): err= 0: pid=485392: Mon Nov 25 12:45:15 2024 00:10:36.350 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:10:36.350 slat (nsec): min=927, max=15213k, avg=79747.81, stdev=635898.91 00:10:36.350 clat (usec): min=1824, max=84862, avg=11729.04, stdev=8126.07 00:10:36.350 lat (usec): min=1832, max=84940, avg=11808.79, stdev=8157.75 00:10:36.350 clat percentiles (usec): 00:10:36.350 | 1.00th=[ 3621], 5.00th=[ 4146], 10.00th=[ 5407], 20.00th=[ 6915], 00:10:36.350 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8717], 60.00th=[10290], 00:10:36.351 | 70.00th=[12780], 80.00th=[15008], 90.00th=[20579], 95.00th=[28443], 00:10:36.351 | 99.00th=[42730], 99.50th=[43779], 99.90th=[84411], 99.95th=[84411], 00:10:36.351 | 99.99th=[84411] 00:10:36.351 write: IOPS=5742, BW=22.4MiB/s (23.5MB/s)(22.6MiB/1007msec); 0 zone resets 00:10:36.351 slat (nsec): min=1548, max=27264k, avg=84204.99, stdev=712179.37 00:10:36.351 clat (usec): min=1223, max=70164, avg=10663.23, stdev=8542.57 00:10:36.351 lat (usec): min=1232, max=70191, avg=10747.44, stdev=8606.96 00:10:36.351 clat percentiles (usec): 00:10:36.351 | 1.00th=[ 1893], 5.00th=[ 3720], 10.00th=[ 4359], 20.00th=[ 6915], 00:10:36.351 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 8291], 60.00th=[ 8717], 00:10:36.351 | 70.00th=[10814], 80.00th=[12780], 90.00th=[17433], 95.00th=[20317], 00:10:36.351 | 99.00th=[54264], 99.50th=[62129], 99.90th=[69731], 99.95th=[69731], 00:10:36.351 | 99.99th=[69731] 00:10:36.351 bw ( KiB/s): min=16576, max=28672, per=24.15%, avg=22624.00, stdev=8553.16, samples=2 00:10:36.351 iops : min= 4144, max= 7168, avg=5656.00, stdev=2138.29, samples=2 00:10:36.351 lat (msec) : 2=0.73%, 4=4.49%, 10=56.46%, 20=29.15%, 50=8.33% 00:10:36.351 lat (msec) : 100=0.83% 00:10:36.351 cpu : usr=3.28%, sys=6.66%, ctx=505, majf=0, minf=1 00:10:36.351 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:36.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.351 issued rwts: total=5632,5783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.351 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.351 00:10:36.351 Run status group 0 (all jobs): 00:10:36.351 READ: bw=86.3MiB/s (90.5MB/s), 20.3MiB/s-23.9MiB/s (21.3MB/s-25.1MB/s), io=86.9MiB (91.1MB), run=1004-1007msec 
00:10:36.351 WRITE: bw=91.5MiB/s (95.9MB/s), 21.9MiB/s-25.4MiB/s (22.9MB/s-26.7MB/s), io=92.1MiB (96.6MB), run=1004-1007msec 00:10:36.351 00:10:36.351 Disk stats (read/write): 00:10:36.351 nvme0n1: ios=4658/5119, merge=0/0, ticks=48746/47166, in_queue=95912, util=86.57% 00:10:36.351 nvme0n2: ios=4886/5120, merge=0/0, ticks=43747/44890, in_queue=88637, util=96.02% 00:10:36.351 nvme0n3: ios=4002/4096, merge=0/0, ticks=38431/33218, in_queue=71649, util=96.09% 00:10:36.351 nvme0n4: ios=4871/5120, merge=0/0, ticks=35828/35509, in_queue=71337, util=88.02% 00:10:36.351 12:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:36.351 12:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=485724 00:10:36.351 12:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:36.351 12:45:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:36.351 [global] 00:10:36.351 thread=1 00:10:36.351 invalidate=1 00:10:36.351 rw=read 00:10:36.351 time_based=1 00:10:36.351 runtime=10 00:10:36.351 ioengine=libaio 00:10:36.351 direct=1 00:10:36.351 bs=4096 00:10:36.351 iodepth=1 00:10:36.351 norandommap=1 00:10:36.351 numjobs=1 00:10:36.351 00:10:36.351 [job0] 00:10:36.351 filename=/dev/nvme0n1 00:10:36.351 [job1] 00:10:36.351 filename=/dev/nvme0n2 00:10:36.351 [job2] 00:10:36.351 filename=/dev/nvme0n3 00:10:36.351 [job3] 00:10:36.351 filename=/dev/nvme0n4 00:10:36.351 Could not set queue depth (nvme0n1) 00:10:36.351 Could not set queue depth (nvme0n2) 00:10:36.351 Could not set queue depth (nvme0n3) 00:10:36.351 Could not set queue depth (nvme0n4) 00:10:36.616 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.616 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.616 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.616 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.616 fio-3.35 00:10:36.616 Starting 4 threads 00:10:39.166 12:45:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:39.166 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:10:39.166 fio: pid=485922, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.166 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:39.429 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=18546688, buflen=4096 00:10:39.429 fio: pid=485921, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.429 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.429 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:39.690 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1687552, buflen=4096 00:10:39.690 fio: pid=485918, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.690 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.690 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:39.951 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.951 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:39.951 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=315392, buflen=4096 00:10:39.951 fio: pid=485919, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:39.951 00:10:39.951 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=485918: Mon Nov 25 12:45:19 2024 00:10:39.951 read: IOPS=139, BW=558KiB/s (572kB/s)(1648KiB/2952msec) 00:10:39.951 slat (usec): min=2, max=12605, avg=70.66, stdev=710.86 00:10:39.951 clat (usec): min=339, max=41792, avg=7035.02, stdev=14694.98 00:10:39.951 lat (usec): min=347, max=41801, avg=7105.79, stdev=14694.18 00:10:39.951 clat percentiles (usec): 00:10:39.951 | 1.00th=[ 478], 5.00th=[ 523], 10.00th=[ 562], 20.00th=[ 635], 00:10:39.951 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 734], 00:10:39.951 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[41157], 95.00th=[41157], 00:10:39.951 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:39.951 | 99.99th=[41681] 00:10:39.951 bw ( KiB/s): min= 96, max= 368, per=2.36%, avg=152.00, stdev=120.80, samples=5 00:10:39.951 iops : min= 24, max= 92, avg=38.00, stdev=30.20, samples=5 00:10:39.951 lat (usec) : 500=3.15%, 750=65.62%, 1000=15.01% 00:10:39.951 lat (msec) : 2=0.24%, 50=15.74% 00:10:39.951 cpu : usr=0.07%, sys=0.41%, ctx=418, majf=0, minf=2 00:10:39.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.951 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.951 issued rwts: total=413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.951 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=485919: Mon Nov 25 12:45:19 2024 00:10:39.951 read: IOPS=24, BW=97.8KiB/s (100kB/s)(308KiB/3150msec) 00:10:39.951 slat (usec): min=9, max=11680, avg=361.53, stdev=1754.03 00:10:39.951 clat (usec): min=854, max=41984, avg=40526.60, stdev=4586.82 00:10:39.951 lat (usec): min=880, max=52989, avg=40799.92, stdev=4892.23 00:10:39.951 clat percentiles (usec): 00:10:39.951 | 1.00th=[ 857], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:39.951 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:39.951 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:39.951 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:39.951 | 99.99th=[42206] 00:10:39.951 bw ( KiB/s): min= 96, max= 104, per=1.50%, avg=97.33, stdev= 3.27, samples=6 00:10:39.951 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:10:39.951 lat (usec) : 
1000=1.28% 00:10:39.951 lat (msec) : 50=97.44% 00:10:39.951 cpu : usr=0.00%, sys=0.29%, ctx=82, majf=0, minf=1 00:10:39.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.951 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.951 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.951 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=485921: Mon Nov 25 12:45:19 2024 00:10:39.951 read: IOPS=1643, BW=6574KiB/s (6732kB/s)(17.7MiB/2755msec) 00:10:39.951 slat (usec): min=5, max=7521, avg=26.06, stdev=148.51 00:10:39.951 clat (usec): min=152, max=1721, avg=574.90, stdev=88.99 00:10:39.951 lat (usec): min=159, max=7933, avg=600.96, stdev=171.79 00:10:39.951 clat percentiles (usec): 00:10:39.951 | 1.00th=[ 322], 5.00th=[ 420], 10.00th=[ 465], 20.00th=[ 502], 00:10:39.952 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 611], 00:10:39.952 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 660], 95.00th=[ 676], 00:10:39.952 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 922], 99.95th=[ 996], 00:10:39.952 | 99.99th=[ 1729] 00:10:39.952 bw ( KiB/s): min= 6624, max= 6760, per=100.00%, avg=6670.40, stdev=54.67, samples=5 00:10:39.952 iops : min= 1656, max= 1690, avg=1667.60, stdev=13.67, samples=5 00:10:39.952 lat (usec) : 250=0.40%, 500=19.61%, 750=78.49%, 1000=1.44% 00:10:39.952 lat (msec) : 2=0.04% 00:10:39.952 cpu : usr=1.49%, sys=4.47%, ctx=4531, majf=0, minf=2 00:10:39.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.952 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.952 issued rwts: total=4529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.952 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=485922: Mon Nov 25 12:45:19 2024 00:10:39.952 read: IOPS=24, BW=97.1KiB/s (99.4kB/s)(252KiB/2595msec) 00:10:39.952 slat (nsec): min=26323, max=34977, avg=26950.92, stdev=1054.46 00:10:39.952 clat (usec): min=720, max=42166, avg=40771.93, stdev=5146.88 00:10:39.952 lat (usec): min=755, max=42193, avg=40798.87, stdev=5145.85 00:10:39.952 clat percentiles (usec): 00:10:39.952 | 1.00th=[ 717], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:39.952 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:39.952 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:39.952 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:39.952 | 99.99th=[42206] 00:10:39.952 bw ( KiB/s): min= 96, max= 104, per=1.50%, avg=97.60, stdev= 3.58, samples=5 00:10:39.952 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:39.952 lat (usec) : 750=1.56% 00:10:39.952 lat (msec) : 50=96.88% 00:10:39.952 cpu : usr=0.12%, sys=0.00%, ctx=65, majf=0, minf=2 00:10:39.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.952 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.952 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.952 
latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.952 00:10:39.952 Run status group 0 (all jobs): 00:10:39.952 READ: bw=6451KiB/s (6606kB/s), 97.1KiB/s-6574KiB/s (99.4kB/s-6732kB/s), io=19.8MiB (20.8MB), run=2595-3150msec 00:10:39.952 00:10:39.952 Disk stats (read/write): 00:10:39.952 nvme0n1: ios=309/0, merge=0/0, ticks=3522/0, in_queue=3522, util=98.40% 00:10:39.952 nvme0n2: ios=75/0, merge=0/0, ticks=3040/0, in_queue=3040, util=95.11% 00:10:39.952 nvme0n3: ios=4308/0, merge=0/0, ticks=2404/0, in_queue=2404, util=96.03% 00:10:39.952 nvme0n4: ios=105/0, merge=0/0, ticks=3475/0, in_queue=3475, util=98.73% 00:10:39.952 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.952 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:40.213 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.213 12:45:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:40.474 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.474 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:40.474 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.474 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:40.735 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 485724 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:40.736 12:45:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:40.736 nvmf hotplug test: fio failed as expected 00:10:40.736 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.998 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.998 rmmod nvme_tcp 00:10:40.998 rmmod nvme_fabrics 00:10:40.998 rmmod nvme_keyring 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 481615 ']' 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 481615 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 481615 ']' 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 481615 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 481615 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 481615' 00:10:41.259 killing process with pid 481615 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 481615 00:10:41.259 12:45:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 481615 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.259 12:45:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.806 00:10:43.806 real 0m30.298s 00:10:43.806 user 2m35.851s 00:10:43.806 sys 0m10.061s 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.806 ************************************ 00:10:43.806 END TEST nvmf_fio_target 00:10:43.806 ************************************ 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.806 ************************************ 00:10:43.806 START TEST nvmf_bdevio 00:10:43.806 ************************************ 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:43.806 * Looking for test storage... 
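The fio_target teardown traced above follows a fixed sequence once the hotplug fio job has exited with its expected error status: disconnect the initiator, delete the subsystem, remove the verify-state files, unload the kernel modules, kill the target, and strip the SPDK iptables rules. Condensed into one hedged sketch (the NQN and PID variables taken from the log, the rpc.py path shortened, and the namespace removal written out as an assumption about what _remove_spdk_ns does):

# sketch of the cleanup path seen above, not the literal script
nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drop the initiator session
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
sync
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring        # the three rmmod lines in the log
kill "$nvmfpid" && wait "$nvmfpid"                       # killprocess 481615
iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: drop only the tagged SPDK rules
ip netns delete cvl_0_0_ns_spdk                          # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1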
00:10:43.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.806 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.807 --rc genhtml_branch_coverage=1 00:10:43.807 --rc genhtml_function_coverage=1 00:10:43.807 --rc genhtml_legend=1 00:10:43.807 --rc geninfo_all_blocks=1 00:10:43.807 --rc geninfo_unexecuted_blocks=1 00:10:43.807 00:10:43.807 ' 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.807 --rc genhtml_branch_coverage=1 00:10:43.807 --rc genhtml_function_coverage=1 00:10:43.807 --rc genhtml_legend=1 00:10:43.807 --rc geninfo_all_blocks=1 00:10:43.807 --rc geninfo_unexecuted_blocks=1 00:10:43.807 00:10:43.807 ' 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.807 --rc genhtml_branch_coverage=1 00:10:43.807 --rc genhtml_function_coverage=1 00:10:43.807 --rc genhtml_legend=1 00:10:43.807 --rc geninfo_all_blocks=1 00:10:43.807 --rc geninfo_unexecuted_blocks=1 00:10:43.807 00:10:43.807 ' 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.807 --rc genhtml_branch_coverage=1 00:10:43.807 --rc genhtml_function_coverage=1 00:10:43.807 --rc genhtml_legend=1 00:10:43.807 --rc geninfo_all_blocks=1 00:10:43.807 --rc geninfo_unexecuted_blocks=1 00:10:43.807 00:10:43.807 ' 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.807 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.808 12:45:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.956 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:51.957 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:51.957 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.957 12:45:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:51.957 Found net devices under 0000:31:00.0: cvl_0_0 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:51.957 Found net devices under 0000:31:00.1: cvl_0_1 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.957 
12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.957 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.738 ms 00:10:52.219 00:10:52.219 --- 10.0.0.2 ping statistics --- 00:10:52.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.219 rtt min/avg/max/mdev = 0.738/0.738/0.738/0.000 ms 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
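nvmf_tcp_init in the trace above splits the two e810 ports: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays on the host as the initiator interface (10.0.0.1), and the NVMe/TCP port is then opened with a tagged iptables rule so teardown can find it again. Collected from the logged commands, with names and addresses exactly as traced:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port enters the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                       # host -> target reachability check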
00:10:52.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:52.219 00:10:52.219 --- 10.0.0.1 ping statistics --- 00:10:52.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.219 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.219 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.220 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:52.220 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.220 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.220 12:45:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=491635 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 491635 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 491635 ']' 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.220 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.220 [2024-11-25 12:45:32.070258] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
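The ping exchange above closes out the network bring-up: one port of the loopback-cabled NIC pair (cvl_0_0, 10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace to host the target, while its sibling (cvl_0_1, 10.0.0.1) stayed in the root namespace as the initiator side, with an iptables ACCEPT opened for TCP/4420. A condensed replay of that sequence, assuming the same interface names and addresses:

    # Sketch: the namespace split performed above, condensed.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns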
00:10:52.220 [2024-11-25 12:45:32.070327] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.481 [2024-11-25 12:45:32.178967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.481 [2024-11-25 12:45:32.228896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.481 [2024-11-25 12:45:32.228948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.481 [2024-11-25 12:45:32.228957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.481 [2024-11-25 12:45:32.228964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.481 [2024-11-25 12:45:32.228970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.481 [2024-11-25 12:45:32.230966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:52.481 [2024-11-25 12:45:32.231244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:52.481 [2024-11-25 12:45:32.231424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:52.481 [2024-11-25 12:45:32.231428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.054 [2024-11-25 12:45:32.933914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.054 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.316 Malloc0 00:10:53.316 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.316 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.316 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.316 12:45:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.316 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.316 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.316 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.316 12:45:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.316 [2024-11-25 12:45:33.010616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.316 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.316 { 00:10:53.316 "params": { 00:10:53.317 "name": "Nvme$subsystem", 00:10:53.317 "trtype": "$TEST_TRANSPORT", 00:10:53.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.317 "adrfam": "ipv4", 00:10:53.317 "trsvcid": "$NVMF_PORT", 00:10:53.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.317 "hdgst": ${hdgst:-false}, 00:10:53.317 "ddgst": ${ddgst:-false} 00:10:53.317 }, 00:10:53.317 "method": "bdev_nvme_attach_controller" 00:10:53.317 } 00:10:53.317 EOF 00:10:53.317 )") 00:10:53.317 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:53.317 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:53.317 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:53.317 12:45:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.317 "params": { 00:10:53.317 "name": "Nvme1", 00:10:53.317 "trtype": "tcp", 00:10:53.317 "traddr": "10.0.0.2", 00:10:53.317 "adrfam": "ipv4", 00:10:53.317 "trsvcid": "4420", 00:10:53.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.317 "hdgst": false, 00:10:53.317 "ddgst": false 00:10:53.317 }, 00:10:53.317 "method": "bdev_nvme_attach_controller" 00:10:53.317 }' 00:10:53.317 [2024-11-25 12:45:33.079707] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
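The JSON fragment printed above is what gen_nvmf_target_json assembles for bdevio, and "--json /dev/fd/62" means the config arrives over a process-substitution file descriptor rather than a file on disk. A sketch of the same pattern; the outer "subsystems"/"bdev" wrapper is the standard SPDK app-config shape and is reconstructed here (only the inner object appears verbatim in this trace), and $SPDK_DIR is a placeholder for a build tree:

    # Sketch: hand bdevio an in-memory config over a process-substitution fd,
    # the same mechanism behind "--json /dev/fd/62" above.
    "$SPDK_DIR/test/bdev/bdevio/bdevio" --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )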
00:10:53.317 [2024-11-25 12:45:33.079780] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491947 ] 00:10:53.317 [2024-11-25 12:45:33.165196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.317 [2024-11-25 12:45:33.209139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.317 [2024-11-25 12:45:33.209313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.317 [2024-11-25 12:45:33.209318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.577 I/O targets: 00:10:53.577 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:53.577 00:10:53.577 00:10:53.577 CUnit - A unit testing framework for C - Version 2.1-3 00:10:53.577 http://cunit.sourceforge.net/ 00:10:53.577 00:10:53.577 00:10:53.577 Suite: bdevio tests on: Nvme1n1 00:10:53.839 Test: blockdev write read block ...passed 00:10:53.839 Test: blockdev write zeroes read block ...passed 00:10:53.839 Test: blockdev write zeroes read no split ...passed 00:10:53.839 Test: blockdev write zeroes read split ...passed 00:10:53.839 Test: blockdev write zeroes read split partial ...passed 00:10:53.839 Test: blockdev reset ...[2024-11-25 12:45:33.603208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:53.839 [2024-11-25 12:45:33.603271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1803530 (9): Bad file descriptor 00:10:53.839 [2024-11-25 12:45:33.712630] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:53.839 passed 00:10:53.839 Test: blockdev write read 8 blocks ...passed 00:10:53.839 Test: blockdev write read size > 128k ...passed 00:10:53.839 Test: blockdev write read invalid size ...passed 00:10:54.100 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:54.100 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:54.100 Test: blockdev write read max offset ...passed 00:10:54.100 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:54.100 Test: blockdev writev readv 8 blocks ...passed 00:10:54.100 Test: blockdev writev readv 30 x 1block ...passed 00:10:54.100 Test: blockdev writev readv block ...passed 00:10:54.100 Test: blockdev writev readv size > 128k ...passed 00:10:54.100 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:54.100 Test: blockdev comparev and writev ...[2024-11-25 12:45:33.928793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.100 [2024-11-25 12:45:33.928818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:54.100 [2024-11-25 12:45:33.928829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.100 [2024-11-25 12:45:33.928835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:54.100 [2024-11-25 12:45:33.929153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.100 [2024-11-25 12:45:33.929162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:54.100 [2024-11-25 12:45:33.929171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.100 [2024-11-25 12:45:33.929177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:54.100 [2024-11-25 12:45:33.929498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.100 [2024-11-25 12:45:33.929506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:54.100 [2024-11-25 12:45:33.929516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.100 [2024-11-25 12:45:33.929525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:54.100 [2024-11-25 12:45:33.929794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.100 [2024-11-25 12:45:33.929802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:54.100 [2024-11-25 12:45:33.929811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.100 [2024-11-25 12:45:33.929817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:54.100 passed 00:10:54.367 Test: blockdev nvme passthru rw ...passed 00:10:54.367 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:45:34.012299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.367 [2024-11-25 12:45:34.012309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:54.367 [2024-11-25 12:45:34.012550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.367 [2024-11-25 12:45:34.012557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:54.367 [2024-11-25 12:45:34.012762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.367 [2024-11-25 12:45:34.012778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:54.367 [2024-11-25 12:45:34.013009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.367 [2024-11-25 12:45:34.013017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:54.367 passed 00:10:54.367 Test: blockdev nvme admin passthru ...passed 00:10:54.367 Test: blockdev copy ...passed 00:10:54.367 00:10:54.367 Run Summary: Type Total Ran Passed Failed Inactive 00:10:54.367 suites 1 1 n/a 0 0 00:10:54.367 tests 23 23 23 0 0 00:10:54.367 asserts 152 152 152 0 n/a 00:10:54.367 00:10:54.367 Elapsed time = 1.196 seconds 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.367 rmmod nvme_tcp 00:10:54.367 rmmod nvme_fabrics 00:10:54.367 rmmod nvme_keyring 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 491635 ']' 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 491635 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 491635 ']' 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 491635 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.367 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 491635 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 491635' 00:10:54.666 killing process with pid 491635 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 491635 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 491635 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.666 12:45:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.620 12:45:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.620 00:10:56.620 real 0m13.261s 00:10:56.620 user 0m13.595s 00:10:56.620 sys 0m6.956s 00:10:56.620 12:45:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.620 12:45:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.620 ************************************ 00:10:56.620 END TEST nvmf_bdevio 00:10:56.620 ************************************ 00:10:56.881 12:45:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:56.881 00:10:56.881 real 5m15.520s 00:10:56.881 user 11m46.565s 00:10:56.881 sys 1m57.412s 00:10:56.881 
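Teardown, as traced above: unload the initiator-side kernel modules, stop the target process, strip only the iptables rules this run added (they were tagged with an SPDK_NVMF comment precisely so iptables-save | grep -v | iptables-restore can drop them surgically), remove the namespace, and flush the initiator address. An approximate condensed equivalent (killprocess and _remove_spdk_ns are harness helpers; the kill and netns lines below approximate what they do):

    # Sketch: the cleanup sequence above, condensed.
    modprobe -r nvme-tcp                                  # also drops nvme_fabrics/nvme_keyring deps
    kill "$nvmfpid"                                       # approximation of killprocess (SIGTERM, then wait loop)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove only the tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # likely what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1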
12:45:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.881 12:45:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:56.881 ************************************ 00:10:56.881 END TEST nvmf_target_core 00:10:56.881 ************************************ 00:10:56.881 12:45:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:56.881 12:45:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:56.881 12:45:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.881 12:45:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:56.881 ************************************ 00:10:56.881 START TEST nvmf_target_extra 00:10:56.881 ************************************ 00:10:56.881 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:56.881 * Looking for test storage... 00:10:56.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:56.881 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:56.881 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:56.881 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.144 --rc genhtml_branch_coverage=1 00:10:57.144 --rc genhtml_function_coverage=1 00:10:57.144 --rc genhtml_legend=1 00:10:57.144 --rc geninfo_all_blocks=1 00:10:57.144 --rc geninfo_unexecuted_blocks=1 00:10:57.144 00:10:57.144 ' 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.144 --rc genhtml_branch_coverage=1 00:10:57.144 --rc genhtml_function_coverage=1 00:10:57.144 --rc genhtml_legend=1 00:10:57.144 --rc geninfo_all_blocks=1 00:10:57.144 --rc geninfo_unexecuted_blocks=1 00:10:57.144 00:10:57.144 ' 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.144 --rc genhtml_branch_coverage=1 00:10:57.144 --rc genhtml_function_coverage=1 00:10:57.144 --rc genhtml_legend=1 00:10:57.144 --rc geninfo_all_blocks=1 00:10:57.144 --rc geninfo_unexecuted_blocks=1 00:10:57.144 00:10:57.144 ' 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.144 --rc genhtml_branch_coverage=1 00:10:57.144 --rc genhtml_function_coverage=1 00:10:57.144 --rc genhtml_legend=1 00:10:57.144 --rc geninfo_all_blocks=1 00:10:57.144 --rc geninfo_unexecuted_blocks=1 00:10:57.144 00:10:57.144 ' 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
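The lt/cmp_versions calls traced above gate the lcov option set on the installed lcov version: both version strings are split on '.', '-' and ':' and compared field by field. A reduced sketch of that comparison (numeric fields only, as in the original):

    # Sketch: the version gate that just ran (scripts/common.sh lt/cmp_versions),
    # reduced to its core: split on .-: and compare field by field.
    ver_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields count as 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2: use the pre-2.0 option set"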
00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.144 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.145 ************************************ 00:10:57.145 START TEST nvmf_example 00:10:57.145 ************************************ 00:10:57.145 12:45:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:57.145 * Looking for test storage... 
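The '[' '' -eq 1 ']' failure above ("[: : integer expression expected", nvmf/common.sh line 33) is an empty variable reaching a numeric test: test(1) cannot compare an empty string as an integer, so the check errors out and simply evaluates false, which is why the run continues. The variable's name is not visible in the trace, so VAR below is a stand-in; the conventional guard is a default expansion:

    # Sketch: reproducing and guarding the empty-operand integer test.
    VAR=""
    [ "$VAR" -eq 1 ] 2>/dev/null || echo "errors out, then evaluates false"
    if [ "${VAR:-0}" -eq 1 ]; then       # ${VAR:-0}: empty/unset becomes 0
        echo "feature enabled"
    fi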
00:10:57.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.145 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:57.145 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:57.145 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:57.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.408 --rc genhtml_branch_coverage=1 00:10:57.408 --rc genhtml_function_coverage=1 00:10:57.408 --rc genhtml_legend=1 00:10:57.408 --rc geninfo_all_blocks=1 00:10:57.408 --rc geninfo_unexecuted_blocks=1 00:10:57.408 00:10:57.408 ' 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:57.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.408 --rc genhtml_branch_coverage=1 00:10:57.408 --rc genhtml_function_coverage=1 00:10:57.408 --rc genhtml_legend=1 00:10:57.408 --rc geninfo_all_blocks=1 00:10:57.408 --rc geninfo_unexecuted_blocks=1 00:10:57.408 00:10:57.408 ' 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:57.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.408 --rc genhtml_branch_coverage=1 00:10:57.408 --rc genhtml_function_coverage=1 00:10:57.408 --rc genhtml_legend=1 00:10:57.408 --rc geninfo_all_blocks=1 00:10:57.408 --rc geninfo_unexecuted_blocks=1 00:10:57.408 00:10:57.408 ' 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:57.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.408 --rc genhtml_branch_coverage=1 00:10:57.408 --rc genhtml_function_coverage=1 00:10:57.408 --rc genhtml_legend=1 00:10:57.408 --rc geninfo_all_blocks=1 00:10:57.408 --rc geninfo_unexecuted_blocks=1 00:10:57.408 00:10:57.408 ' 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:57.408 12:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.408 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:57.409 12:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.409 12:45:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:05.560 12:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:05.560 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:05.560 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:05.560 Found net devices under 0000:31:00.0: cvl_0_0 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:05.560 Found net devices under 0000:31:00.1: cvl_0_1 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.560 12:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:05.560 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:05.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms
00:11:05.561
00:11:05.561 --- 10.0.0.2 ping statistics ---
00:11:05.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:05.561 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:05.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:05.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms
00:11:05.561
00:11:05.561 --- 10.0.0.1 ping statistics ---
00:11:05.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:05.561 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=497081
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 497081
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 497081 ']'
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example
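
What nvmf_tcp_init has just done is emulate a two-host NVMe/TCP setup on one machine: cvl_0_0 becomes the target-side interface inside a fresh network namespace (cvl_0_0_ns_spdk) with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables ACCEPT for TCP port 4420 (the standard NVMe/TCP port) is inserted and tagged with an SPDK_NVMF comment so it can be filtered out again at teardown, and the two one-packet pings above prove the path in both directions before the example target is launched inside the namespace. Consolidated from the commands traced above (the real comment embeds the full rule text, abbreviated here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'             # tag rule for later cleanup
    ping -c 1 10.0.0.2                                   # initiator ns -> target ns
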
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.561 12:45:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.506 12:45:46 
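
With the example target up (pid 497081) and its RPC server listening on /var/tmp/spdk.sock, the test provisions it over JSON-RPC: create the TCP transport with an 8 KiB I/O unit, create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host, -s sets its serial number), attach Malloc0 as a namespace, and open a TCP listener on 10.0.0.2:4420 (its [[ 0 == 0 ]] completion check appears just below). rpc_cmd is the harness wrapper around SPDK's RPC client; the equivalent sequence with scripts/rpc.py, arguments copied from the trace, would be roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512               # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
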
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:06.506 12:45:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:18.743 Initializing NVMe Controllers 00:11:18.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:18.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:18.743 Initialization complete. Launching workers. 00:11:18.743 ======================================================== 00:11:18.743 Latency(us) 00:11:18.743 Device Information : IOPS MiB/s Average min max 00:11:18.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17588.12 68.70 3638.32 862.17 16562.59 00:11:18.743 ======================================================== 00:11:18.743 Total : 17588.12 68.70 3638.32 862.17 16562.59 00:11:18.743 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.743 rmmod nvme_tcp 00:11:18.743 rmmod nvme_fabrics 00:11:18.743 rmmod nvme_keyring 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 497081 ']' 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 497081 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 497081 ']' 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 497081 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 497081 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
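
The workload above is spdk_nvme_perf at queue depth 64, 4 KiB I/Os, random mixed read/write with what appears to be a 30% read mix (-M 30), running for 10 seconds against 10.0.0.2:4420 / cnode1 over TCP. The result row is internally consistent: 17588.12 IOPS x 4096 B is about 68.7 MiB/s, matching the MiB/s column, and by Little's law 64 outstanding I/Os / 17588.12 IOPS is about 3.64 ms, matching the 3638.32 us average latency. Once perf exits, nvmftestfini tears the stack down: the modprobe -v -r calls unload nvme-tcp and its dependencies (the rmmod lines are modprobe's verbose output) and killprocess stops the target pid.
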
process_name=nvmf 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 497081' 00:11:18.743 killing process with pid 497081 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 497081 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 497081 00:11:18.743 nvmf threads initialize successfully 00:11:18.743 bdev subsystem init successfully 00:11:18.743 created a nvmf target service 00:11:18.743 create targets's poll groups done 00:11:18.743 all subsystems of target started 00:11:18.743 nvmf target is running 00:11:18.743 all subsystems of target stopped 00:11:18.743 destroy targets's poll groups done 00:11:18.743 destroyed the nvmf target service 00:11:18.743 bdev subsystem finish successfully 00:11:18.743 nvmf threads destroy successfully 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.743 12:45:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.315 00:11:19.315 real 0m22.152s 00:11:19.315 user 0m47.038s 00:11:19.315 sys 0m7.367s 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.315 ************************************ 00:11:19.315 END TEST nvmf_example 00:11:19.315 ************************************ 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:19.315 12:45:59 
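
Teardown is the mirror image of setup: the iptr helper replays iptables-save through grep -v SPDK_NVMF into iptables-restore, so exactly the comment-tagged rules added earlier disappear and every other rule survives the round trip; remove_spdk_ns (its body runs with xtrace silenced) is presumably what deletes the scratch namespace, and the leftover initiator address is flushed. The example test then finishes in about 22 s of wall time (real 0m22.152s) and run_test moves straight on to the nvmf_filesystem suite. The firewall cleanup, spelled out:

    # Drop only the rules SPDK tagged with an SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
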
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.315 ************************************ 00:11:19.315 START TEST nvmf_filesystem 00:11:19.315 ************************************ 00:11:19.315 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:19.580 * Looking for test storage... 00:11:19.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.580 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.581 --rc genhtml_branch_coverage=1 00:11:19.581 --rc genhtml_function_coverage=1 00:11:19.581 --rc genhtml_legend=1 00:11:19.581 --rc geninfo_all_blocks=1 00:11:19.581 --rc geninfo_unexecuted_blocks=1 00:11:19.581 00:11:19.581 ' 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.581 --rc genhtml_branch_coverage=1 00:11:19.581 --rc genhtml_function_coverage=1 00:11:19.581 --rc genhtml_legend=1 00:11:19.581 --rc geninfo_all_blocks=1 00:11:19.581 --rc geninfo_unexecuted_blocks=1 00:11:19.581 00:11:19.581 ' 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.581 --rc genhtml_branch_coverage=1 00:11:19.581 --rc genhtml_function_coverage=1 00:11:19.581 --rc genhtml_legend=1 00:11:19.581 --rc geninfo_all_blocks=1 00:11:19.581 --rc geninfo_unexecuted_blocks=1 00:11:19.581 00:11:19.581 ' 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.581 --rc genhtml_branch_coverage=1 00:11:19.581 --rc genhtml_function_coverage=1 00:11:19.581 --rc genhtml_legend=1 00:11:19.581 --rc geninfo_all_blocks=1 00:11:19.581 --rc geninfo_unexecuted_blocks=1 00:11:19.581 00:11:19.581 ' 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:19.581 12:45:59 
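
Before sourcing the test environment, the coverage shim checks whether the installed lcov predates version 2 (lt 1.15 2 here) so it can export compatible --rc flags; cmp_versions splits each version string on ".", "-" and ":" and compares the fields numerically from the left, padding the shorter version with zeros. A compact sketch of that comparison pattern (the function below is illustrative, not the harness's exact code):

    lt() {                                   # succeeds when $1 < $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                             # equal is not less-than
    }
    lt 1.15 2 && echo "lcov is older than 2"   # prints: lcov is older than 2
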
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:19.581 
12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:19.581 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:19.582 #define SPDK_CONFIG_H 00:11:19.582 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:19.582 #define SPDK_CONFIG_APPS 1 00:11:19.582 #define SPDK_CONFIG_ARCH native 00:11:19.582 #undef SPDK_CONFIG_ASAN 00:11:19.582 #undef SPDK_CONFIG_AVAHI 00:11:19.582 #undef SPDK_CONFIG_CET 00:11:19.582 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:19.582 #define SPDK_CONFIG_COVERAGE 1 00:11:19.582 #define SPDK_CONFIG_CROSS_PREFIX 00:11:19.582 #undef SPDK_CONFIG_CRYPTO 00:11:19.582 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:19.582 #undef SPDK_CONFIG_CUSTOMOCF 00:11:19.582 #undef SPDK_CONFIG_DAOS 00:11:19.582 #define SPDK_CONFIG_DAOS_DIR 00:11:19.582 #define SPDK_CONFIG_DEBUG 1 00:11:19.582 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:19.582 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:19.582 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:19.582 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:19.582 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:19.582 #undef SPDK_CONFIG_DPDK_UADK 00:11:19.582 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:19.582 #define SPDK_CONFIG_EXAMPLES 1 00:11:19.582 #undef SPDK_CONFIG_FC 00:11:19.582 #define SPDK_CONFIG_FC_PATH 00:11:19.582 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:19.582 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:19.582 #define SPDK_CONFIG_FSDEV 1 00:11:19.582 #undef SPDK_CONFIG_FUSE 00:11:19.582 #undef SPDK_CONFIG_FUZZER 00:11:19.582 #define SPDK_CONFIG_FUZZER_LIB 00:11:19.582 #undef SPDK_CONFIG_GOLANG 00:11:19.582 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:19.582 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:19.582 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:19.582 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:19.582 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:19.582 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:19.582 #undef SPDK_CONFIG_HAVE_LZ4 00:11:19.582 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:19.582 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:19.582 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:19.582 #define SPDK_CONFIG_IDXD 1 00:11:19.582 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:19.582 #undef SPDK_CONFIG_IPSEC_MB 00:11:19.582 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:19.582 #define SPDK_CONFIG_ISAL 1 00:11:19.582 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:19.582 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:19.582 #define SPDK_CONFIG_LIBDIR 00:11:19.582 #undef SPDK_CONFIG_LTO 00:11:19.582 #define SPDK_CONFIG_MAX_LCORES 128 00:11:19.582 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:19.582 #define SPDK_CONFIG_NVME_CUSE 1 00:11:19.582 #undef SPDK_CONFIG_OCF 00:11:19.582 #define SPDK_CONFIG_OCF_PATH 00:11:19.582 #define SPDK_CONFIG_OPENSSL_PATH 00:11:19.582 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:19.582 #define SPDK_CONFIG_PGO_DIR 00:11:19.582 #undef SPDK_CONFIG_PGO_USE 00:11:19.582 #define SPDK_CONFIG_PREFIX /usr/local 00:11:19.582 #undef SPDK_CONFIG_RAID5F 00:11:19.582 #undef SPDK_CONFIG_RBD 00:11:19.582 #define SPDK_CONFIG_RDMA 1 00:11:19.582 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:19.582 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:19.582 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:19.582 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:19.582 #define SPDK_CONFIG_SHARED 1 00:11:19.582 #undef SPDK_CONFIG_SMA 00:11:19.582 #define SPDK_CONFIG_TESTS 1 00:11:19.582 #undef SPDK_CONFIG_TSAN 
00:11:19.582 #define SPDK_CONFIG_UBLK 1 00:11:19.582 #define SPDK_CONFIG_UBSAN 1 00:11:19.582 #undef SPDK_CONFIG_UNIT_TESTS 00:11:19.582 #undef SPDK_CONFIG_URING 00:11:19.582 #define SPDK_CONFIG_URING_PATH 00:11:19.582 #undef SPDK_CONFIG_URING_ZNS 00:11:19.582 #undef SPDK_CONFIG_USDT 00:11:19.582 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:19.582 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:19.582 #define SPDK_CONFIG_VFIO_USER 1 00:11:19.582 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:19.582 #define SPDK_CONFIG_VHOST 1 00:11:19.582 #define SPDK_CONFIG_VIRTIO 1 00:11:19.582 #undef SPDK_CONFIG_VTUNE 00:11:19.582 #define SPDK_CONFIG_VTUNE_DIR 00:11:19.582 #define SPDK_CONFIG_WERROR 1 00:11:19.582 #define SPDK_CONFIG_WPDK_DIR 00:11:19.582 #undef SPDK_CONFIG_XNVME 00:11:19.582 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.582 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
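
The #define/#undef wall above is the build's include/spdk/config.h, captured because applications.sh glob-matches the file's contents against the pattern #define SPDK_CONFIG_DEBUG (hence the backslash-escaped pattern at the end of the test) to decide whether SPDK_AUTOTEST_DEBUG_APPS handling applies; this build has #define SPDK_CONFIG_DEBUG 1, so the match succeeds. The probe reduces to something like the following sketch, assuming $rootdir is the SPDK checkout as elsewhere in the trace:

    if [[ $(< "$rootdir/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build: SPDK_AUTOTEST_DEBUG_APPS may take effect
    fi
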
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:19.583 12:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:19.583 12:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:19.583 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
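The paired "# : 0" / "# export SPDK_TEST_..." records above are consistent with the usual bash defaulting idiom, where the no-op builtin ":" evaluates a ${VAR:=default} expansion so xtrace prints the resulting value. A minimal sketch of that pattern, with illustrative flag values taken from this run (the exact source of autotest_common.sh is not shown in the log):

    # ":" is a no-op, but the expansion inside it assigns a default when the
    # variable is unset or empty; under `set -x` this traces as ": 0" or ": 1".
    : "${SPDK_TEST_BLOCKDEV:=0}"; export SPDK_TEST_BLOCKDEV
    : "${SPDK_RUN_UBSAN:=1}";     export SPDK_RUN_UBSAN
    echo "SPDK_TEST_BLOCKDEV=$SPDK_TEST_BLOCKDEV SPDK_RUN_UBSAN=$SPDK_RUN_UBSAN"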
common/autotest_common.sh@169 -- # : 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
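The ASAN_OPTIONS and UBSAN_OPTIONS exports traced here configure the sanitizer runtimes, not the shell: each string is a colon-separated list of key=value pairs read from the environment when an instrumented process starts. A short sketch reproducing the settings from this run (the target binary name is hypothetical):

    # Sanitizer runtime knobs as exported in this run; parsed by ASan/UBSan
    # at process startup, so they only need to be in the child's environment.
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
    ./instrumented_test_binary   # hypothetical target; inherits the options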
00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.584 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
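The set_test_storage trace that follows probes for a filesystem with at least the requested 2 GiB free before picking a test directory. A condensed sketch of the df -T parsing visible in the records below (not the exact helper; it skips the fallback-candidate loop):

    # Parse `df -T` into per-mount tables, then check the current directory's
    # mount point against the requested size, mirroring the traced arrays.
    requested=2147483648                      # 2 GiB, as in this run
    declare -A fss avails
    while read -r src fs size used avail _ mnt; do
      fss["$mnt"]=$fs
      avails["$mnt"]=$((avail * 1024))        # df reports 1K blocks; store bytes
    done < <(df -T | grep -v Filesystem)
    mnt=$(df "$PWD" | awk '$1 !~ /Filesystem/{print $6}')
    (( ${avails[$mnt]:-0} >= requested )) && echo "enough room on $mnt (${fss[$mnt]})"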
00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 499872 ]] 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 499872 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:19.585 
12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Q9aFGJ 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Q9aFGJ/tests/target /tmp/spdk.Q9aFGJ 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:19.585 12:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122218061824 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7138488320 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666906624 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847697408 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23613440 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:19.585 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.586 12:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677486592 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=790528 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:19.586 * Looking for test storage... 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122218061824 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9353080832 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.586 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.847 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.847 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.847 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.847 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.847 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.848 --rc genhtml_branch_coverage=1 00:11:19.848 --rc genhtml_function_coverage=1 00:11:19.848 --rc genhtml_legend=1 00:11:19.848 --rc geninfo_all_blocks=1 00:11:19.848 --rc geninfo_unexecuted_blocks=1 00:11:19.848 00:11:19.848 ' 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.848 --rc genhtml_branch_coverage=1 00:11:19.848 --rc genhtml_function_coverage=1 00:11:19.848 --rc genhtml_legend=1 00:11:19.848 --rc geninfo_all_blocks=1 00:11:19.848 --rc geninfo_unexecuted_blocks=1 00:11:19.848 00:11:19.848 ' 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.848 --rc genhtml_branch_coverage=1 00:11:19.848 --rc genhtml_function_coverage=1 00:11:19.848 --rc genhtml_legend=1 00:11:19.848 --rc geninfo_all_blocks=1 00:11:19.848 --rc geninfo_unexecuted_blocks=1 00:11:19.848 00:11:19.848 ' 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.848 --rc genhtml_branch_coverage=1 00:11:19.848 --rc genhtml_function_coverage=1 00:11:19.848 --rc genhtml_legend=1 00:11:19.848 --rc geninfo_all_blocks=1 00:11:19.848 --rc geninfo_unexecuted_blocks=1 00:11:19.848 00:11:19.848 ' 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
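The "lt 1.15 2" check traced above decides that the installed lcov predates version 2, which selects the older --rc option spelling. A condensed sketch of the cmp_versions logic shown in the trace, assuming purely numeric fields (the real helper also handles mixed separators and equality operators):

    # Split both versions on ".", "-" and ":", then compare field-by-field
    # numerically, treating missing fields as 0.
    lt() {
      local -a v1 v2; local i n
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "1.15 < 2"   # matches this run: lcov 1.15 is older than 2.x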
-- nvmf/common.sh@7 -- # uname -s 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.848 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.849 12:45:59 
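The "[: : integer expression expected" message above is a real (non-fatal) defect visible in the trace: the test at nvmf/common.sh line 33 runs as '[' '' -eq 1 ']', i.e. -eq receives an empty string where it needs an integer. A defensive sketch of the pattern that avoids it (VAR is illustrative; the actual variable name at line 33 is not shown in the log):

    # Default the expansion so the numeric comparison always sees an integer.
    VAR=""                            # unset/empty, as in the failing record
    if [ "${VAR:-0}" -eq 1 ]; then
      echo "feature enabled"
    fi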
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.849 12:45:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.987 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:27.988 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:27.988 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.988 12:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:27.988 Found net devices under 0000:31:00.0: cvl_0_0 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:27.988 Found net devices under 0000:31:00.1: cvl_0_1 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:27.988 12:46:07 
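The device scan traced above walks candidate PCI addresses, matches vendor/device IDs against known NIC families, and records the net interfaces sysfs exposes under each function; 0x8086/0x159b is the Intel E810 pair found twice in this run (cvl_0_0 and cvl_0_1). A minimal sketch of that sysfs walk (not the cached lookup the script uses):

    # For each PCI function, read its vendor/device IDs and list attached
    # network interfaces, as in the "Found net devices under ..." records.
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
      done
    done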
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.988 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:11:27.989 00:11:27.989 --- 10.0.0.2 ping statistics --- 00:11:27.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.989 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:11:27.989 00:11:27.989 --- 10.0.0.1 ping statistics --- 00:11:27.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.989 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.989 ************************************ 00:11:27.989 START TEST nvmf_filesystem_no_in_capsule 00:11:27.989 ************************************ 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=503951 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 503951 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 503951 ']' 00:11:27.989 12:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.989 12:46:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.989 [2024-11-25 12:46:07.568724] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:11:27.989 [2024-11-25 12:46:07.568789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.989 [2024-11-25 12:46:07.662209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.989 [2024-11-25 12:46:07.703825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.989 [2024-11-25 12:46:07.703870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.989 [2024-11-25 12:46:07.703879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.989 [2024-11-25 12:46:07.703886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.989 [2024-11-25 12:46:07.703891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
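
The records above show nvmfappstart launching nvmf_tgt inside the namespace prepared by nvmf_tcp_init, with waitforlisten blocking until the target's RPC socket answers. A minimal sketch of that handshake, under assumptions: the default /var/tmp/spdk.sock socket, the stock rpc.py client as the liveness probe, and an illustrative retry count (the real waitforlisten helper also checks that the pid is still alive):

    # launch the target in the netns set up earlier by nvmf_tcp_init
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is ready to serve RPCs
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
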
00:11:27.989 [2024-11-25 12:46:07.705545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.989 [2024-11-25 12:46:07.705663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.989 [2024-11-25 12:46:07.705819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.989 [2024-11-25 12:46:07.705820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.559 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.559 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.560 [2024-11-25 12:46:08.424485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.560 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.821 Malloc1 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.821 12:46:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.821 [2024-11-25 12:46:08.560407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:28.821 { 00:11:28.821 "name": "Malloc1", 00:11:28.821 "aliases": [ 00:11:28.821 "36cb7db1-111a-425b-a6b1-7df7bcd61be6" 00:11:28.821 ], 00:11:28.821 "product_name": "Malloc disk", 00:11:28.821 "block_size": 512, 00:11:28.821 "num_blocks": 1048576, 00:11:28.821 "uuid": "36cb7db1-111a-425b-a6b1-7df7bcd61be6", 00:11:28.821 "assigned_rate_limits": { 00:11:28.821 "rw_ios_per_sec": 0, 00:11:28.821 "rw_mbytes_per_sec": 0, 00:11:28.821 "r_mbytes_per_sec": 0, 00:11:28.821 "w_mbytes_per_sec": 0 00:11:28.821 }, 00:11:28.821 "claimed": true, 00:11:28.821 "claim_type": "exclusive_write", 00:11:28.821 "zoned": false, 00:11:28.821 "supported_io_types": { 00:11:28.821 "read": 
true, 00:11:28.821 "write": true, 00:11:28.821 "unmap": true, 00:11:28.821 "flush": true, 00:11:28.821 "reset": true, 00:11:28.821 "nvme_admin": false, 00:11:28.821 "nvme_io": false, 00:11:28.821 "nvme_io_md": false, 00:11:28.821 "write_zeroes": true, 00:11:28.821 "zcopy": true, 00:11:28.821 "get_zone_info": false, 00:11:28.821 "zone_management": false, 00:11:28.821 "zone_append": false, 00:11:28.821 "compare": false, 00:11:28.821 "compare_and_write": false, 00:11:28.821 "abort": true, 00:11:28.821 "seek_hole": false, 00:11:28.821 "seek_data": false, 00:11:28.821 "copy": true, 00:11:28.821 "nvme_iov_md": false 00:11:28.821 }, 00:11:28.821 "memory_domains": [ 00:11:28.821 { 00:11:28.821 "dma_device_id": "system", 00:11:28.821 "dma_device_type": 1 00:11:28.821 }, 00:11:28.821 { 00:11:28.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.821 "dma_device_type": 2 00:11:28.821 } 00:11:28.821 ], 00:11:28.821 "driver_specific": {} 00:11:28.821 } 00:11:28.821 ]' 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:28.821 12:46:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.734 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.734 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:30.734 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.734 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:30.734 12:46:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:32.644 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:32.645 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:32.645 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:32.645 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:32.645 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:32.905 12:46:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.290 ************************************ 00:11:34.290 START TEST filesystem_ext4 00:11:34.290 ************************************ 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
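
Before each mkfs run below, the host side has already done the connect-and-partition sequence traced above: waitforserial keys off the subsystem serial rather than a device name, polling lsblk until SPDKISFASTANDAWESOME appears, and the disk then gets a GPT label with one full-size partition. Condensed from the trace (hostnqn/hostid flags omitted here; nvme0n1 is simply the name that enumerated on this host):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # wait until a block device with the subsystem's serial shows up
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe   # each filesystem sub-test then runs mkfs/mount/touch/sync/umount on /dev/nvme0n1p1
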
00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:34.290 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:34.291 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:34.291 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:34.291 12:46:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:34.291 mke2fs 1.47.0 (5-Feb-2023) 00:11:34.291 Discarding device blocks: 0/522240 done 00:11:34.291 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:34.291 Filesystem UUID: eae3d745-d3d0-4b8b-99f8-d508269b185f 00:11:34.291 Superblock backups stored on blocks: 00:11:34.291 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:34.291 00:11:34.291 Allocating group tables: 0/64 done 00:11:34.291 Writing inode tables: 0/64 done 00:11:36.836 Creating journal (8192 blocks): done 00:11:37.097 Writing superblocks and filesystem accounting information: 0/64 done 00:11:37.097 00:11:37.097 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:37.097 12:46:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.682 
12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 503951 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.682 00:11:43.682 real 0m8.730s 00:11:43.682 user 0m0.029s 00:11:43.682 sys 0m0.079s 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:43.682 ************************************ 00:11:43.682 END TEST filesystem_ext4 00:11:43.682 ************************************ 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.682 ************************************ 00:11:43.682 START TEST filesystem_btrfs 00:11:43.682 ************************************ 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:43.682 12:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.682 12:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:43.682 btrfs-progs v6.8.1 00:11:43.682 See https://btrfs.readthedocs.io for more information. 00:11:43.682 00:11:43.682 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:43.682 NOTE: several default settings have changed in version 5.15, please make sure 00:11:43.682 this does not affect your deployments: 00:11:43.682 - DUP for metadata (-m dup) 00:11:43.682 - enabled no-holes (-O no-holes) 00:11:43.682 - enabled free-space-tree (-R free-space-tree) 00:11:43.682 00:11:43.682 Label: (null) 00:11:43.682 UUID: b3a09152-bec5-4ad2-b8a8-96e06da967fb 00:11:43.682 Node size: 16384 00:11:43.682 Sector size: 4096 (CPU page size: 4096) 00:11:43.682 Filesystem size: 510.00MiB 00:11:43.682 Block group profiles: 00:11:43.682 Data: single 8.00MiB 00:11:43.682 Metadata: DUP 32.00MiB 00:11:43.682 System: DUP 8.00MiB 00:11:43.682 SSD detected: yes 00:11:43.682 Zoned device: no 00:11:43.682 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:43.682 Checksum: crc32c 00:11:43.682 Number of devices: 1 00:11:43.682 Devices: 00:11:43.682 ID SIZE PATH 00:11:43.682 1 510.00MiB /dev/nvme0n1p1 00:11:43.682 00:11:43.682 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:43.682 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 503951 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.254 
12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.254 00:11:44.254 real 0m1.331s 00:11:44.254 user 0m0.029s 00:11:44.254 sys 0m0.123s 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.254 12:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:44.254 ************************************ 00:11:44.254 END TEST filesystem_btrfs 00:11:44.254 ************************************ 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.254 ************************************ 00:11:44.254 START TEST filesystem_xfs 00:11:44.254 ************************************ 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:44.254 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:44.254 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:44.254 = sectsz=512 attr=2, projid32bit=1 00:11:44.254 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:44.254 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:44.254 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:44.254 = sunit=0 swidth=0 blks 00:11:44.255 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:44.255 log =internal log bsize=4096 blocks=16384, version=2 00:11:44.255 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:44.255 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:45.200 Discarding blocks...Done. 00:11:45.200 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:45.200 12:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 503951 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.745 00:11:47.745 real 0m3.282s 00:11:47.745 user 0m0.031s 00:11:47.745 sys 0m0.076s 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.745 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:47.746 ************************************ 00:11:47.746 END TEST filesystem_xfs 00:11:47.746 ************************************ 00:11:47.746 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:47.746 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:48.006 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.006 12:46:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.006 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:48.006 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:48.006 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.006 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:48.006 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 503951 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 503951 ']' 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 503951 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 503951 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 503951' 00:11:48.267 killing process with pid 503951 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 503951 00:11:48.267 12:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 503951 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:48.529 00:11:48.529 real 0m20.714s 00:11:48.529 user 1m21.876s 00:11:48.529 sys 0m1.466s 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.529 ************************************ 00:11:48.529 END TEST nvmf_filesystem_no_in_capsule 00:11:48.529 ************************************ 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.529 ************************************ 00:11:48.529 START TEST nvmf_filesystem_in_capsule 00:11:48.529 ************************************ 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=508183 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 508183 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 508183 ']' 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
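
This second pass repeats the same filesystem matrix, but the transport created below is given a 4096-byte in-capsule data size, so writes up to 4 KiB travel inside the NVMe/TCP command capsule instead of being fetched by the target via R2T. The only RPC that differs from the no-in-capsule run is the transport creation; a sketch using rpc.py directly in place of the test's rpc_cmd wrapper, with the flags copied from the trace:

    # identical to the first pass except -c 4096 instead of -c 0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
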
00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.529 12:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.529 [2024-11-25 12:46:28.361571] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:11:48.529 [2024-11-25 12:46:28.361626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.790 [2024-11-25 12:46:28.448721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.790 [2024-11-25 12:46:28.488621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.791 [2024-11-25 12:46:28.488656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.791 [2024-11-25 12:46:28.488664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.791 [2024-11-25 12:46:28.488671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.791 [2024-11-25 12:46:28.488677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.791 [2024-11-25 12:46:28.490265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.791 [2024-11-25 12:46:28.490380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.791 [2024-11-25 12:46:28.490535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.791 [2024-11-25 12:46:28.490536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.362 [2024-11-25 12:46:29.212637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.362 12:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:49.362 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.363 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 Malloc1 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 [2024-11-25 12:46:29.350361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:49.624 12:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:49.624 { 00:11:49.624 "name": "Malloc1", 00:11:49.624 "aliases": [ 00:11:49.624 "b4eee940-b9af-4926-b691-c9f181860d0f" 00:11:49.624 ], 00:11:49.624 "product_name": "Malloc disk", 00:11:49.624 "block_size": 512, 00:11:49.624 "num_blocks": 1048576, 00:11:49.624 "uuid": "b4eee940-b9af-4926-b691-c9f181860d0f", 00:11:49.624 "assigned_rate_limits": { 00:11:49.624 "rw_ios_per_sec": 0, 00:11:49.624 "rw_mbytes_per_sec": 0, 00:11:49.624 "r_mbytes_per_sec": 0, 00:11:49.624 "w_mbytes_per_sec": 0 00:11:49.624 }, 00:11:49.624 "claimed": true, 00:11:49.624 "claim_type": "exclusive_write", 00:11:49.624 "zoned": false, 00:11:49.624 "supported_io_types": { 00:11:49.624 "read": true, 00:11:49.624 "write": true, 00:11:49.624 "unmap": true, 00:11:49.624 "flush": true, 00:11:49.624 "reset": true, 00:11:49.624 "nvme_admin": false, 00:11:49.624 "nvme_io": false, 00:11:49.624 "nvme_io_md": false, 00:11:49.624 "write_zeroes": true, 00:11:49.624 "zcopy": true, 00:11:49.624 "get_zone_info": false, 00:11:49.624 "zone_management": false, 00:11:49.624 "zone_append": false, 00:11:49.624 "compare": false, 00:11:49.624 "compare_and_write": false, 00:11:49.624 "abort": true, 00:11:49.624 "seek_hole": false, 00:11:49.624 "seek_data": false, 00:11:49.624 "copy": true, 00:11:49.624 "nvme_iov_md": false 00:11:49.624 }, 00:11:49.624 "memory_domains": [ 00:11:49.624 { 00:11:49.624 "dma_device_id": "system", 00:11:49.624 "dma_device_type": 1 00:11:49.624 }, 00:11:49.624 { 00:11:49.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.624 "dma_device_type": 2 00:11:49.624 } 00:11:49.624 ], 00:11:49.624 "driver_specific": {} 00:11:49.624 } 00:11:49.624 ]' 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.624 12:46:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.539 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.539 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.539 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.539 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:51.539 12:46:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.451 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.451 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.451 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.451 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.451 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.451 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:53.451 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:53.451 12:46:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:53.451 12:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:53.712 12:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:54.657 12:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.597 ************************************ 00:11:55.597 START TEST filesystem_in_capsule_ext4 00:11:55.597 ************************************ 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:55.597 12:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:55.597 mke2fs 1.47.0 (5-Feb-2023) 00:11:55.597 Discarding device blocks: 0/522240 done 00:11:55.597 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:55.597 Filesystem UUID: a4b60897-d275-4da8-b11d-8728e1feaafc 00:11:55.597 Superblock backups stored on blocks: 00:11:55.597 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:55.597 00:11:55.597 Allocating group tables: 0/64 done 00:11:55.597 Writing inode tables: 
0/64 done 00:11:56.984 Creating journal (8192 blocks): done 00:11:58.939 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:58.939 00:11:58.939 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:58.939 12:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.222 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 508183 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.484 00:12:04.484 real 0m8.851s 00:12:04.484 user 0m0.024s 00:12:04.484 sys 0m0.083s 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:04.484 ************************************ 00:12:04.484 END TEST filesystem_in_capsule_ext4 00:12:04.484 ************************************ 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.484 
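The ext4 pass that just completed (real 0m8.851s, almost all of it mkfs on the 512 MiB namespace) boils down to a short create/verify cycle. A condensed re-creation, assuming root and the GPT partition made above:

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync      # push a write through the fabric
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 508183                     # confirm the nvmf_tgt pid survived the I/O

The btrfs and xfs passes below repeat exactly this cycle with a different mkfs.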
************************************ 00:12:04.484 START TEST filesystem_in_capsule_btrfs 00:12:04.484 ************************************ 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:04.484 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:04.485 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:04.485 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:04.485 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:04.485 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:04.485 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:04.832 btrfs-progs v6.8.1 00:12:04.832 See https://btrfs.readthedocs.io for more information. 00:12:04.832 00:12:04.832 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:04.832 NOTE: several default settings have changed in version 5.15, please make sure 00:12:04.832 this does not affect your deployments: 00:12:04.832 - DUP for metadata (-m dup) 00:12:04.832 - enabled no-holes (-O no-holes) 00:12:04.832 - enabled free-space-tree (-R free-space-tree) 00:12:04.832 00:12:04.832 Label: (null) 00:12:04.832 UUID: 093e9787-c91f-4db8-bb54-740fb9ca60d5 00:12:04.832 Node size: 16384 00:12:04.832 Sector size: 4096 (CPU page size: 4096) 00:12:04.832 Filesystem size: 510.00MiB 00:12:04.832 Block group profiles: 00:12:04.832 Data: single 8.00MiB 00:12:04.832 Metadata: DUP 32.00MiB 00:12:04.832 System: DUP 8.00MiB 00:12:04.832 SSD detected: yes 00:12:04.832 Zoned device: no 00:12:04.832 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:04.832 Checksum: crc32c 00:12:04.832 Number of devices: 1 00:12:04.832 Devices: 00:12:04.832 ID SIZE PATH 00:12:04.832 1 510.00MiB /dev/nvme0n1p1 00:12:04.832 00:12:04.832 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:04.832 12:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.478 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.478 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:05.478 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.478 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:05.478 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:05.478 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 508183 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.479 00:12:05.479 real 0m0.984s 00:12:05.479 user 0m0.026s 00:12:05.479 sys 0m0.124s 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:05.479 ************************************ 00:12:05.479 END TEST filesystem_in_capsule_btrfs 00:12:05.479 ************************************ 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.479 ************************************ 00:12:05.479 START TEST filesystem_in_capsule_xfs 00:12:05.479 ************************************ 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:05.479 12:46:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:05.762 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:05.762 = sectsz=512 attr=2, projid32bit=1 00:12:05.762 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:05.762 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:05.762 data = bsize=4096 blocks=130560, imaxpct=25 00:12:05.762 = sunit=0 swidth=0 blks 00:12:05.762 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:05.762 log =internal log bsize=4096 blocks=16384, version=2 00:12:05.762 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:05.762 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:06.336 Discarding blocks...Done. 
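All three filesystem passes funnel through the make_filesystem helper whose xtrace appears above; the only per-filesystem branch visible in the log is the force flag (-F for ext4, -f for btrfs and xfs). A sketch of that helper, stripped of its retry counter (the real one in autotest_common.sh also keeps the local i loop variable seen in the trace):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4's mkfs spells "force" as -F; btrfs and xfs use -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"
    }
    make_filesystem xfs /dev/nvme0n1p1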
00:12:06.336 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:06.336 12:46:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:08.882 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:08.882 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:08.882 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:08.882 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:08.882 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:08.882 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 508183 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.143 00:12:09.143 real 0m3.472s 00:12:09.143 user 0m0.031s 00:12:09.143 sys 0m0.076s 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.143 ************************************ 00:12:09.143 END TEST filesystem_in_capsule_xfs 00:12:09.143 ************************************ 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:09.143 12:46:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.144 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.404 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:09.404 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:09.404 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.404 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:09.404 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.404 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:09.404 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 508183 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 508183 ']' 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 508183 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 508183 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 508183' 00:12:09.405 killing process with pid 508183 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 508183 00:12:09.405 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 508183 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:09.667 00:12:09.667 real 0m21.078s 00:12:09.667 user 1m23.378s 00:12:09.667 sys 0m1.456s 00:12:09.667 12:46:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.667 ************************************ 00:12:09.667 END TEST nvmf_filesystem_in_capsule 00:12:09.667 ************************************ 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.667 rmmod nvme_tcp 00:12:09.667 rmmod nvme_fabrics 00:12:09.667 rmmod nvme_keyring 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.667 12:46:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.214 00:12:12.214 real 0m52.439s 00:12:12.214 user 2m47.605s 00:12:12.214 sys 0m9.124s 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.214 
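The teardown just logged by nvmftestfini, condensed into its effective commands. This is a sketch, run as root; the last two lines assume _remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace created at init, which the log only shows indirectly via the final "ip -4 addr flush":

    kill "$nvmfpid" && wait "$nvmfpid"                     # nvmf_tgt pid, 508183 in this run
    modprobe -v -r nvme-tcp nvme-fabrics                   # nvme_keyring unloads alongside, per the rmmod lines
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk                        # assumption: what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1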
************************************ 00:12:12.214 END TEST nvmf_filesystem 00:12:12.214 ************************************ 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.214 ************************************ 00:12:12.214 START TEST nvmf_target_discovery 00:12:12.214 ************************************ 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:12.214 * Looking for test storage... 00:12:12.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.214 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:12.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.214 --rc genhtml_branch_coverage=1 00:12:12.215 --rc genhtml_function_coverage=1 00:12:12.215 --rc genhtml_legend=1 00:12:12.215 --rc geninfo_all_blocks=1 00:12:12.215 --rc geninfo_unexecuted_blocks=1 00:12:12.215 00:12:12.215 ' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.215 --rc genhtml_branch_coverage=1 00:12:12.215 --rc genhtml_function_coverage=1 00:12:12.215 --rc genhtml_legend=1 00:12:12.215 --rc geninfo_all_blocks=1 00:12:12.215 --rc geninfo_unexecuted_blocks=1 00:12:12.215 00:12:12.215 ' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.215 --rc genhtml_branch_coverage=1 00:12:12.215 --rc genhtml_function_coverage=1 00:12:12.215 --rc genhtml_legend=1 00:12:12.215 --rc geninfo_all_blocks=1 00:12:12.215 --rc geninfo_unexecuted_blocks=1 00:12:12.215 00:12:12.215 ' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:12.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.215 --rc genhtml_branch_coverage=1 00:12:12.215 --rc genhtml_function_coverage=1 00:12:12.215 --rc genhtml_legend=1 00:12:12.215 --rc geninfo_all_blocks=1 00:12:12.215 --rc geninfo_unexecuted_blocks=1 00:12:12.215 00:12:12.215 ' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.215 12:46:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.357 12:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.357 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:20.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:20.358 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:20.358 Found net devices under 0000:31:00.0: cvl_0_0 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
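The NIC discovery above never touches a driver tool; it maps each candidate PCI function to its kernel net device purely through sysfs, exactly as the pci_net_devs glob shows. Equivalent one-liners for the two E810 ports (device id 0x159b) found in this pool:

    ls /sys/bus/pci/devices/0000:31:00.0/net/   # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:31:00.1/net/   # -> cvl_0_1
    cat /sys/class/net/cvl_0_0/operstate        # the '[[ up == up ]]' check above presumably reads this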
00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:20.358 Found net devices under 0000:31:00.1: cvl_0_1 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.358 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.620 12:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:12:20.620 00:12:20.620 --- 10.0.0.2 ping statistics --- 00:12:20.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.620 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:12:20.620 00:12:20.620 --- 10.0.0.1 ping statistics --- 00:12:20.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.620 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.620 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=517412 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 517412 00:12:20.882 12:47:00 
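Restating the single-host topology nvmftestinit just built: the target port cvl_0_0 is moved into its own network namespace, so initiator (10.0.0.1) and target (10.0.0.2) traffic genuinely crosses the link between the two physical ports. In plain iproute2 terms, as logged above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # 0.598 ms in this run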
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 517412 ']' 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.882 12:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.882 [2024-11-25 12:47:00.599595] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:12:20.882 [2024-11-25 12:47:00.599666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.882 [2024-11-25 12:47:00.695116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.882 [2024-11-25 12:47:00.736753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.882 [2024-11-25 12:47:00.736787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.882 [2024-11-25 12:47:00.736795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.882 [2024-11-25 12:47:00.736803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.882 [2024-11-25 12:47:00.736809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
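The target application is started through the namespace wrapper so every listener it opens binds inside cvl_0_0_ns_spdk; -m 0xF requests a four-core reactor mask (hence the four "Reactor started" notices that follow) and -e 0xFFFF enables all tracepoint groups. Reduced to its essentials, the launch logged above is:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!    # waitforlisten then polls /var/tmp/spdk.sock for this pid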
00:12:20.882 [2024-11-25 12:47:00.738317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.882 [2024-11-25 12:47:00.738412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.882 [2024-11-25 12:47:00.738570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.882 [2024-11-25 12:47:00.738570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 [2024-11-25 12:47:01.453216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 Null1 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 12:47:01 
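rpc_cmd forwards its arguments to scripts/rpc.py against /var/tmp/spdk.sock, so the discovery.sh loop that begins here amounts to the following per-target sequence (a sketch of the flow, not a verbatim excerpt; the serial numbers match the SPDK00000000000001..04 values visible later in the JSON dump):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      ./scripts/rpc.py bdev_null_create "Null$i" 102400 512    # 100 GiB null bdev, 512 B blocks
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          -a -s "$(printf 'SPDK%014d' "$i")"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done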
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 [2024-11-25 12:47:01.513521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 Null2 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.825 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:21.826 Null3 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.826 Null4 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.826 12:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:21.826 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420
00:12:22.087
00:12:22.087 Discovery Log Number of Records 6, Generation counter 6
00:12:22.087 =====Discovery Log Entry 0======
00:12:22.087 trtype: tcp
00:12:22.087 adrfam: ipv4
00:12:22.087 subtype: current discovery subsystem
00:12:22.087 treq: not required
00:12:22.087 portid: 0
00:12:22.087 trsvcid: 4420
00:12:22.087 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:22.087 traddr: 10.0.0.2
00:12:22.087 eflags: explicit discovery connections, duplicate discovery information
00:12:22.087 sectype: none
00:12:22.087 =====Discovery Log Entry 1======
00:12:22.087 trtype: tcp
00:12:22.088 adrfam: ipv4
00:12:22.088 subtype: nvme subsystem
00:12:22.088 treq: not required
00:12:22.088 portid: 0
00:12:22.088 trsvcid: 4420
00:12:22.088 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:22.088 traddr: 10.0.0.2
00:12:22.088 eflags: none
00:12:22.088 sectype: none
00:12:22.088 =====Discovery Log Entry 2======
00:12:22.088 trtype: tcp
00:12:22.088 adrfam: ipv4
00:12:22.088 subtype: nvme subsystem
00:12:22.088 treq: not required
00:12:22.088 portid: 0
00:12:22.088 trsvcid: 4420
00:12:22.088 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:22.088 traddr: 10.0.0.2
00:12:22.088 eflags: none
00:12:22.088 sectype: none
00:12:22.088 =====Discovery Log Entry 3======
00:12:22.088 trtype: tcp
00:12:22.088 adrfam: ipv4
00:12:22.088 subtype: nvme subsystem
00:12:22.088 treq: not required
00:12:22.088 portid: 0
00:12:22.088 trsvcid: 4420
00:12:22.088 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:22.088 traddr: 10.0.0.2
00:12:22.088 eflags: none
00:12:22.088 sectype: none
00:12:22.088 =====Discovery Log Entry 4======
00:12:22.088 trtype: tcp
00:12:22.088 adrfam: ipv4
00:12:22.088 subtype: nvme subsystem
00:12:22.088 treq: not required
00:12:22.088 portid: 0
00:12:22.088 trsvcid: 4420
00:12:22.088 subnqn: nqn.2016-06.io.spdk:cnode4
00:12:22.088 traddr: 10.0.0.2
00:12:22.088 eflags: none
00:12:22.088 sectype: none
00:12:22.088 =====Discovery Log Entry 5======
00:12:22.088 trtype: tcp
00:12:22.088 adrfam: ipv4
00:12:22.088 subtype: discovery subsystem referral
00:12:22.088 treq: not required
00:12:22.088 portid: 0
00:12:22.088 trsvcid: 4430
00:12:22.088 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:22.088 traddr: 10.0.0.2
00:12:22.088 eflags: none
00:12:22.088 sectype: none
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:22.088 Perform nvmf subsystem discovery via RPC
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:22.088 [
00:12:22.088 {
00:12:22.088 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:22.088 "subtype": "Discovery",
00:12:22.088 "listen_addresses": [
00:12:22.088 {
00:12:22.088 "trtype": "TCP",
00:12:22.088 "adrfam": "IPv4",
00:12:22.088 "traddr": "10.0.0.2",
00:12:22.088 "trsvcid": "4420"
00:12:22.088 }
00:12:22.088 ],
00:12:22.088 "allow_any_host": true,
00:12:22.088 "hosts": []
00:12:22.088 },
00:12:22.088 {
00:12:22.088 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:22.088 "subtype": "NVMe",
00:12:22.088 "listen_addresses": [
00:12:22.088 {
00:12:22.088 "trtype": "TCP",
00:12:22.088 "adrfam": "IPv4",
00:12:22.088 "traddr": "10.0.0.2",
00:12:22.088 "trsvcid": "4420"
00:12:22.088 }
00:12:22.088 ],
00:12:22.088 "allow_any_host": true,
00:12:22.088 "hosts": [],
00:12:22.088 "serial_number": "SPDK00000000000001",
00:12:22.088 "model_number": "SPDK bdev Controller",
00:12:22.088 "max_namespaces": 32,
00:12:22.088 "min_cntlid": 1,
00:12:22.088 "max_cntlid": 65519,
00:12:22.088 "namespaces": [
00:12:22.088 {
00:12:22.088 "nsid": 1,
00:12:22.088 "bdev_name": "Null1",
00:12:22.088 "name": "Null1",
00:12:22.088 "nguid": "1F1FFF6E3717457CA8B559498C3025CC",
00:12:22.088 "uuid": "1f1fff6e-3717-457c-a8b5-59498c3025cc"
00:12:22.088 }
00:12:22.088 ]
00:12:22.088 },
00:12:22.088 {
00:12:22.088 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:22.088 "subtype": "NVMe",
00:12:22.088 "listen_addresses": [
00:12:22.088 {
00:12:22.088 "trtype": "TCP",
00:12:22.088 "adrfam": "IPv4",
00:12:22.088 "traddr": "10.0.0.2",
00:12:22.088 "trsvcid": "4420"
00:12:22.088 }
00:12:22.088 ],
00:12:22.088 "allow_any_host": true,
00:12:22.088 "hosts": [],
00:12:22.088 "serial_number": "SPDK00000000000002",
00:12:22.088 "model_number": "SPDK bdev Controller",
00:12:22.088 "max_namespaces": 32,
00:12:22.088 "min_cntlid": 1,
00:12:22.088 "max_cntlid": 65519,
00:12:22.088 "namespaces": [
00:12:22.088 {
00:12:22.088 "nsid": 1,
00:12:22.088 "bdev_name": "Null2",
00:12:22.088 "name": "Null2",
00:12:22.088 "nguid": "14F2B2A58F0D433ABAE3D135DE5C9768",
00:12:22.088 "uuid": "14f2b2a5-8f0d-433a-bae3-d135de5c9768"
00:12:22.088 }
00:12:22.088 ]
00:12:22.088 },
00:12:22.088 {
00:12:22.088 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:22.088 "subtype": "NVMe",
00:12:22.088 "listen_addresses": [
00:12:22.088 {
00:12:22.088 "trtype": "TCP",
00:12:22.088 "adrfam": "IPv4",
00:12:22.088 "traddr": "10.0.0.2",
00:12:22.088 "trsvcid": "4420"
00:12:22.088 }
00:12:22.088 ],
00:12:22.088 "allow_any_host": true,
00:12:22.088 "hosts": [],
00:12:22.088 "serial_number": "SPDK00000000000003",
00:12:22.088 "model_number": "SPDK bdev Controller",
00:12:22.088 "max_namespaces": 32,
00:12:22.088 "min_cntlid": 1,
00:12:22.088 "max_cntlid": 65519,
00:12:22.088 "namespaces": [
00:12:22.088 {
00:12:22.088 "nsid": 1,
00:12:22.088 "bdev_name": "Null3",
00:12:22.088 "name": "Null3",
00:12:22.088 "nguid": "D4AAD5B6F40E40D8B32A5BF60A5A1A93",
00:12:22.088 "uuid": "d4aad5b6-f40e-40d8-b32a-5bf60a5a1a93"
00:12:22.088 }
00:12:22.088 ]
00:12:22.088 },
00:12:22.088 {
00:12:22.088 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:22.088 "subtype": "NVMe",
00:12:22.088 "listen_addresses": [
00:12:22.088 {
00:12:22.088 "trtype": "TCP",
00:12:22.088 "adrfam": "IPv4",
00:12:22.088 "traddr": "10.0.0.2",
00:12:22.088 "trsvcid": "4420"
00:12:22.088 }
00:12:22.088 ],
00:12:22.088 "allow_any_host": true,
00:12:22.088 "hosts": [],
00:12:22.088 "serial_number": "SPDK00000000000004",
00:12:22.088 "model_number": "SPDK bdev Controller",
00:12:22.088 "max_namespaces": 32,
00:12:22.088 "min_cntlid": 1,
00:12:22.088 "max_cntlid": 65519,
00:12:22.088 "namespaces": [
00:12:22.088 {
00:12:22.088 "nsid": 1,
00:12:22.088 "bdev_name": "Null4",
00:12:22.088 "name": "Null4",
00:12:22.088 "nguid": "A1CC29D37A344905BFCB2C14FD71B9B8",
00:12:22.088 "uuid": "a1cc29d3-7a34-4905-bfcb-2c14fd71b9b8"
00:12:22.088 }
00:12:22.088 ]
00:12:22.088 }
00:12:22.088 ]
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:22.088 12:47:01
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:22.088 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.089 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.089 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.089 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:22.089 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.089 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.350 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.350 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:22.350 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.350 12:47:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:22.350 12:47:02 
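Teardown mirrors setup: each subsystem is deleted before its backing null bdev, the referral added earlier is removed, and the test only passes if bdev_get_bdevs comes back empty. As plain rpc.py calls, the sequence above is roughly (a sketch, not a verbatim discovery.sh excerpt):

  for i in 1 2 3 4; do
      ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      ./scripts/rpc.py bdev_null_delete "Null$i"
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  check_bdevs=$(./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
  [ -z "$check_bdevs" ]    # any leftover bdev fails the test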
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.350 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.351 rmmod nvme_tcp 00:12:22.351 rmmod nvme_fabrics 00:12:22.351 rmmod nvme_keyring 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 517412 ']' 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 517412 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 517412 ']' 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 517412 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 517412 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 517412' 00:12:22.351 killing process with pid 517412 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 517412 00:12:22.351 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 517412 00:12:22.612 12:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.612 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.613 12:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.523 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.523 00:12:24.523 real 0m12.713s 00:12:24.523 user 0m8.897s 00:12:24.523 sys 0m6.946s 00:12:24.523 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.523 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.523 ************************************ 00:12:24.523 END TEST nvmf_target_discovery 00:12:24.523 ************************************ 00:12:24.523 12:47:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:24.523 12:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.523 12:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.523 12:47:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.785 ************************************ 00:12:24.785 START TEST nvmf_referrals 00:12:24.785 ************************************ 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:24.785 * Looking for test storage... 
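One detail worth noting in the teardown just completed: because every rule installed through the ipts wrapper carries an 'SPDK_NVMF:' comment, the iptr cleanup never has to track rule positions; it simply filters the tagged rules out of a full save/restore cycle, exactly as nvmf/common.sh@791 shows:

  iptables-save | grep -v SPDK_NVMF | iptables-restore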
00:12:24.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.785 --rc genhtml_branch_coverage=1 00:12:24.785 --rc genhtml_function_coverage=1 00:12:24.785 --rc genhtml_legend=1 00:12:24.785 --rc geninfo_all_blocks=1 00:12:24.785 --rc geninfo_unexecuted_blocks=1 00:12:24.785 00:12:24.785 ' 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.785 --rc genhtml_branch_coverage=1 00:12:24.785 --rc genhtml_function_coverage=1 00:12:24.785 --rc genhtml_legend=1 00:12:24.785 --rc geninfo_all_blocks=1 00:12:24.785 --rc geninfo_unexecuted_blocks=1 00:12:24.785 00:12:24.785 ' 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.785 --rc genhtml_branch_coverage=1 00:12:24.785 --rc genhtml_function_coverage=1 00:12:24.785 --rc genhtml_legend=1 00:12:24.785 --rc geninfo_all_blocks=1 00:12:24.785 --rc geninfo_unexecuted_blocks=1 00:12:24.785 00:12:24.785 ' 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:24.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.785 --rc genhtml_branch_coverage=1 00:12:24.785 --rc genhtml_function_coverage=1 00:12:24.785 --rc genhtml_legend=1 00:12:24.785 --rc geninfo_all_blocks=1 00:12:24.785 --rc geninfo_unexecuted_blocks=1 00:12:24.785 00:12:24.785 ' 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.785 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.046 12:47:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:33.202 12:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:33.202 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:33.202 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:33.202 
12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.202 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:33.203 Found net devices under 0000:31:00.0: cvl_0_0 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:33.203 Found net devices under 0000:31:00.1: cvl_0_1 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:33.203 12:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.203 12:47:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.203 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.203 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.203 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:33.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:12:33.464 00:12:33.464 --- 10.0.0.2 ping statistics --- 00:12:33.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.464 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:12:33.464 00:12:33.464 --- 10.0.0.1 ping statistics --- 00:12:33.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.464 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=522465 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 522465 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 522465 ']' 00:12:33.464 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.465 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.465 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
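What the trace above amounts to is a small two-port topology on one host: the target-side port is moved into its own network namespace so that 10.0.0.1 -> 10.0.0.2 traffic actually crosses the wire instead of being short-circuited by the local stack, and a single ping in each direction proves the path before the target comes up. A minimal standalone sketch of that wiring, with hypothetical interface names eth_a/eth_b standing in for cvl_0_0/cvl_0_1:

    #!/usr/bin/env bash
    # Reconstruction of the namespace wiring traced in the log; eth_a/eth_b
    # are hypothetical stand-ins for the paired NIC ports cvl_0_0/cvl_0_1.
    set -euo pipefail

    NS=tgt_ns            # namespace that owns the target-side port
    TGT_IF=eth_a         # port handed to the namespace (target, 10.0.0.2)
    INI_IF=eth_b         # port kept in the root namespace (initiator, 10.0.0.1)

    ip -4 addr flush "$TGT_IF"      # drop any stale addresses first
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP connections on the initiator-side port (4420 is the
    # default NVMe/TCP service port; the discovery listener here uses 8009).
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Prove the path in both directions, exactly as the harness does.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1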
00:12:33.465 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.465 12:47:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.465 [2024-11-25 12:47:13.226980] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:12:33.465 [2024-11-25 12:47:13.227048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.465 [2024-11-25 12:47:13.318343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.465 [2024-11-25 12:47:13.359388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.465 [2024-11-25 12:47:13.359421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.465 [2024-11-25 12:47:13.359429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.465 [2024-11-25 12:47:13.359437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.465 [2024-11-25 12:47:13.359443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.465 [2024-11-25 12:47:13.361104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.465 [2024-11-25 12:47:13.361246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.465 [2024-11-25 12:47:13.361402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.465 [2024-11-25 12:47:13.361403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.406 [2024-11-25 12:47:14.087883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
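nvmfappstart has just launched nvmf_tgt inside the namespace, and rpc_cmd is now driving it over /var/tmp/spdk.sock; stripped of the harness wrappers, the same bring-up can be issued directly with SPDK's scripts/rpc.py. A sketch under the assumption that $SPDK_DIR points at the checkout (the Jenkins workspace path in this log):

    # Hypothetical checkout root; the log uses the Jenkins workspace path.
    SPDK_DIR=/path/to/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

    # The target itself was started under the namespace, as traced above:
    #   ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

    # Create the TCP transport with the traced flags (-u 8192 sets the I/O
    # unit size; -o is rpc.py's TCP C2H-success toggle).
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # Listen for discovery traffic on the target-side address; "discovery"
    # resolves to the well-known NQN nqn.2014-08.org.nvmexpress.discovery.
    $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery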
00:12:34.406 [2024-11-25 12:47:14.104087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:34.406 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.407 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:34.668 12:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.668 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.928 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.929 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:35.189 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:35.189 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:35.189 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:35.189 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:35.189 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:35.189 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.189 12:47:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.449 12:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.449 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.711 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:35.972 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:35.972 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:35.972 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:35.972 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:35.972 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.972 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.233 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:36.234 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.234 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:36.234 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:36.234 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:36.234 12:47:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:36.234 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:36.234 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:36.234 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
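The referral exercise that just finished follows one loop: add referrals over RPC, read them back both from the target (nvmf_discovery_get_referrals) and from an initiator's discovery log (nvme discover), check that the two views agree, then remove them and check for an empty log. A condensed sketch of that loop (paths hypothetical; the harness additionally passes --hostnqn/--hostid to nvme discover):

    RPC="/path/to/spdk/scripts/rpc.py"    # hypothetical path to the checkout

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # Target-side view over RPC: expect three entries.
    $RPC nvmf_discovery_get_referrals | jq length                        # 3
    $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Wire-side view: everything in the discovery log except the record
    # describing the discovery subsystem the initiator is connected to.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

    # Referrals can also name a specific subsystem (-n nqn.2016-06.io.spdk:cnode1)
    # or the discovery service (-n discovery), which is what the later steps verify
    # through the subnqn field of the discovery records.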
00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.494 rmmod nvme_tcp 00:12:36.494 rmmod nvme_fabrics 00:12:36.494 rmmod nvme_keyring 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 522465 ']' 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 522465 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 522465 ']' 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 522465 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522465 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 522465' 00:12:36.494 killing process with pid 522465 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 522465 00:12:36.494 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 522465 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.755 12:47:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.668 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:38.668 00:12:38.668 real 0m14.101s 00:12:38.668 user 0m15.883s 00:12:38.668 sys 0m7.183s 00:12:38.668 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.668 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:38.668 ************************************ 00:12:38.668 END TEST nvmf_referrals 00:12:38.668 ************************************ 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:38.929 ************************************ 00:12:38.929 START TEST nvmf_connect_disconnect 00:12:38.929 ************************************ 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:38.929 * Looking for test storage... 00:12:38.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.929 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:39.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.190 --rc genhtml_branch_coverage=1 00:12:39.190 --rc genhtml_function_coverage=1 00:12:39.190 --rc genhtml_legend=1 00:12:39.190 --rc geninfo_all_blocks=1 00:12:39.190 --rc geninfo_unexecuted_blocks=1 00:12:39.190 00:12:39.190 ' 00:12:39.190 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:39.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.190 --rc genhtml_branch_coverage=1 00:12:39.190 --rc genhtml_function_coverage=1 00:12:39.190 --rc genhtml_legend=1 00:12:39.190 --rc geninfo_all_blocks=1 00:12:39.190 --rc geninfo_unexecuted_blocks=1 00:12:39.191 00:12:39.191 ' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:39.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.191 --rc genhtml_branch_coverage=1 00:12:39.191 --rc genhtml_function_coverage=1 00:12:39.191 --rc genhtml_legend=1 00:12:39.191 --rc geninfo_all_blocks=1 00:12:39.191 --rc geninfo_unexecuted_blocks=1 00:12:39.191 00:12:39.191 ' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:39.191 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.191 --rc genhtml_branch_coverage=1 00:12:39.191 --rc genhtml_function_coverage=1 00:12:39.191 --rc genhtml_legend=1 00:12:39.191 --rc geninfo_all_blocks=1 00:12:39.191 --rc geninfo_unexecuted_blocks=1 00:12:39.191 00:12:39.191 ' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.191 12:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.191 12:47:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.326 
12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:47.326 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.326 
12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:47.326 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:47.326 Found net devices under 0000:31:00.0: cvl_0_0 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:47.326 Found net devices under 0000:31:00.1: cvl_0_1 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.326 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.327 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.327 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.327 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.327 12:47:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:12:47.327 00:12:47.327 --- 10.0.0.2 ping statistics --- 00:12:47.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.327 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:12:47.327 00:12:47.327 --- 10.0.0.1 ping statistics --- 00:12:47.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.327 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=527916 00:12:47.327 12:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 527916 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 527916 ']' 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:47.327 12:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.587 [2024-11-25 12:47:27.241481] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:12:47.587 [2024-11-25 12:47:27.241535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.587 [2024-11-25 12:47:27.327269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.587 [2024-11-25 12:47:27.364378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.587 [2024-11-25 12:47:27.364409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.587 [2024-11-25 12:47:27.364418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.587 [2024-11-25 12:47:27.364424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.587 [2024-11-25 12:47:27.364430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
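The nvmf_tcp_init sequence traced at nvmf/common.sh@250-291 above builds a two-port loopback topology on this host: one E810 port (cvl_0_0, 10.0.0.2) is moved into a private network namespace as the target side, its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, an iptables rule tagged SPDK_NVMF opens TCP port 4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed into a plain script (same commands as the trace; run as root; the interface names are specific to this host):

# Target port into its own namespace; initiator stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the rule so teardown can drop it with:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator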
00:12:47.587 [2024-11-25 12:47:27.365901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.587 [2024-11-25 12:47:27.366078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.587 [2024-11-25 12:47:27.366237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.587 [2024-11-25 12:47:27.366238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.159 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.159 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:48.159 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:48.159 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:48.159 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.419 [2024-11-25 12:47:28.087839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.419 12:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:48.419 [2024-11-25 12:47:28.155201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:48.419 12:47:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:52.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.709 rmmod nvme_tcp 00:13:06.709 rmmod nvme_fabrics 00:13:06.709 rmmod nvme_keyring 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 527916 ']' 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 527916 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 527916 ']' 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 527916 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
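With the target running, connect_disconnect.sh@18-24 above provisions it over /var/tmp/spdk.sock: a TCP transport, a RAM-backed bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420, then loops nvme connect/disconnect against the NQN five times (num_iterations=5; hence the five "disconnected 1 controller(s)" lines). The rpc_cmd helper wraps SPDK's scripts/rpc.py, so the equivalent direct invocation — a sketch assuming an SPDK checkout, with flag semantics as documented by rpc.py --help — is:

RPC=./scripts/rpc.py   # issue inside the target's netns, as the test does
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0       # flags copied from the trace
$RPC bdev_malloc_create 64 512                          # 64 MiB bdev, 512 B blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420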
00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527916 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527916' 00:13:06.709 killing process with pid 527916 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 527916 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 527916 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.709 12:47:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.255 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.255 00:13:09.255 real 0m30.026s 00:13:09.255 user 1m18.771s 00:13:09.255 sys 0m7.813s 00:13:09.255 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.255 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:09.255 ************************************ 00:13:09.255 END TEST nvmf_connect_disconnect 00:13:09.255 ************************************ 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:13:09.256 ************************************ 00:13:09.256 START TEST nvmf_multitarget 00:13:09.256 ************************************ 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:09.256 * Looking for test storage... 00:13:09.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.256 --rc genhtml_branch_coverage=1 00:13:09.256 --rc genhtml_function_coverage=1 00:13:09.256 --rc genhtml_legend=1 00:13:09.256 --rc geninfo_all_blocks=1 00:13:09.256 --rc geninfo_unexecuted_blocks=1 00:13:09.256 00:13:09.256 ' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.256 --rc genhtml_branch_coverage=1 00:13:09.256 --rc genhtml_function_coverage=1 00:13:09.256 --rc genhtml_legend=1 00:13:09.256 --rc geninfo_all_blocks=1 00:13:09.256 --rc geninfo_unexecuted_blocks=1 00:13:09.256 00:13:09.256 ' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.256 --rc genhtml_branch_coverage=1 00:13:09.256 --rc genhtml_function_coverage=1 00:13:09.256 --rc genhtml_legend=1 00:13:09.256 --rc geninfo_all_blocks=1 00:13:09.256 --rc geninfo_unexecuted_blocks=1 00:13:09.256 00:13:09.256 ' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.256 --rc genhtml_branch_coverage=1 00:13:09.256 --rc genhtml_function_coverage=1 00:13:09.256 --rc genhtml_legend=1 00:13:09.256 --rc geninfo_all_blocks=1 00:13:09.256 --rc geninfo_unexecuted_blocks=1 00:13:09.256 00:13:09.256 ' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.256 12:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.256 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:09.257 12:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.257 12:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:17.586 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:17.586 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:17.586 Found net devices under 0000:31:00.0: cvl_0_0 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.586 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:17.587 Found net devices under 0000:31:00.1: cvl_0_1 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:13:17.587 00:13:17.587 --- 10.0.0.2 ping statistics --- 00:13:17.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.587 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:13:17.587 00:13:17.587 --- 10.0.0.1 ping statistics --- 00:13:17.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.587 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:17.587 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=536406 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 536406 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 536406 ']' 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.849 12:47:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:17.849 [2024-11-25 12:47:57.573101] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:13:17.849 [2024-11-25 12:47:57.573166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.849 [2024-11-25 12:47:57.664167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.849 [2024-11-25 12:47:57.705508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.849 [2024-11-25 12:47:57.705545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.849 [2024-11-25 12:47:57.705553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.849 [2024-11-25 12:47:57.705560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.849 [2024-11-25 12:47:57.705565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.849 [2024-11-25 12:47:57.707135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.849 [2024-11-25 12:47:57.707252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.849 [2024-11-25 12:47:57.707407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.849 [2024-11-25 12:47:57.707407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:18.790 "nvmf_tgt_1" 00:13:18.790 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:19.051 "nvmf_tgt_2" 00:13:19.051 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:13:19.051 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:19.051 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:19.051 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:19.051 true 00:13:19.313 12:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:19.313 true 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.313 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.313 rmmod nvme_tcp 00:13:19.313 rmmod nvme_fabrics 00:13:19.313 rmmod nvme_keyring 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 536406 ']' 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 536406 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 536406 ']' 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 536406 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 536406 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.574 12:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 536406' 00:13:19.574 killing process with pid 536406 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 536406 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 536406 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.574 12:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.124 00:13:22.124 real 0m12.744s 00:13:22.124 user 0m10.102s 00:13:22.124 sys 0m6.843s 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:22.124 ************************************ 00:13:22.124 END TEST nvmf_multitarget 00:13:22.124 ************************************ 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.124 ************************************ 00:13:22.124 START TEST nvmf_rpc 00:13:22.124 ************************************ 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:22.124 * Looking for test storage... 
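The nvmftestfini teardown traced just before the END TEST banner unwinds everything nvmftestinit set up. Condensed from the trace (the pid and interface names are this run's; the namespace removal body is silenced by the 15> /dev/null redirect above, so the ip netns delete line is an assumption about what _remove_spdk_ns does):

sync
modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill 536406                    # the nvmf_tgt pid recorded at startup
# drop only the SPDK-tagged firewall rules, leaving the rest intact
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1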
00:13:22.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:22.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.124 --rc genhtml_branch_coverage=1 00:13:22.124 --rc genhtml_function_coverage=1 00:13:22.124 --rc genhtml_legend=1 00:13:22.124 --rc geninfo_all_blocks=1 00:13:22.124 --rc geninfo_unexecuted_blocks=1 00:13:22.124 00:13:22.124 ' 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:22.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.124 --rc genhtml_branch_coverage=1 00:13:22.124 --rc genhtml_function_coverage=1 00:13:22.124 --rc genhtml_legend=1 00:13:22.124 --rc geninfo_all_blocks=1 00:13:22.124 --rc geninfo_unexecuted_blocks=1 00:13:22.124 00:13:22.124 ' 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:22.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.124 --rc genhtml_branch_coverage=1 00:13:22.124 --rc genhtml_function_coverage=1 00:13:22.124 --rc genhtml_legend=1 00:13:22.124 --rc geninfo_all_blocks=1 00:13:22.124 --rc geninfo_unexecuted_blocks=1 00:13:22.124 00:13:22.124 ' 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:22.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.124 --rc genhtml_branch_coverage=1 00:13:22.124 --rc genhtml_function_coverage=1 00:13:22.124 --rc genhtml_legend=1 00:13:22.124 --rc geninfo_all_blocks=1 00:13:22.124 --rc geninfo_unexecuted_blocks=1 00:13:22.124 00:13:22.124 ' 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
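The lt 1.15 2 probe above (checking whether the installed lcov predates 2.x) is a field-by-field numeric compare of dotted version strings; the real cmp_versions in scripts/common.sh splits on '.', '-' and ':' and validates each field. A minimal sketch of the same idea, assuming plain dotted numeric versions:

lt() {
    local IFS=.
    local -a a=($1) b=($2)   # split both versions on dots
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0   # missing fields count as 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov is older than 2.x"

1.15 vs 2 is decided in the first field (1 < 2), so 1.15 sorts below 2 even though its second field is larger.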
00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.124 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:22.125 12:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.125 12:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:30.266 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:30.266 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:30.266 Found net devices under 0000:31:00.0: cvl_0_0 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:30.266 Found net devices under 0000:31:00.1: cvl_0_1 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.266 12:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.266 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.528 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.528 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.528 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.528 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.528 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.528 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.528 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:13:30.529 00:13:30.529 --- 10.0.0.2 ping statistics --- 00:13:30.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.529 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:13:30.529 00:13:30.529 --- 10.0.0.1 ping statistics --- 00:13:30.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.529 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.529 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.789 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:30.789 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.789 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.789 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.789 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=542142 00:13:30.789 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 542142 00:13:30.789 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:30.789 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 542142 ']' 00:13:30.790 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.790 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.790 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.790 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.790 12:48:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.790 [2024-11-25 12:48:10.524178] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
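The environment prep that just completed is easier to read replayed in one place: the target-side port is moved into a private network namespace so that initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) traffic crosses the physical link even on a single box. Condensed from the trace (interface names and addresses are this rig's):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP (port 4420) in, tagged so the teardown can strip it again
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt application itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as the startup line above shows), while RPCs keep reaching it over the shared /var/tmp/spdk.sock unix socket.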
00:13:30.790 [2024-11-25 12:48:10.524241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.790 [2024-11-25 12:48:10.625136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.790 [2024-11-25 12:48:10.667585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.790 [2024-11-25 12:48:10.667625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.790 [2024-11-25 12:48:10.667633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.790 [2024-11-25 12:48:10.667640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.790 [2024-11-25 12:48:10.667645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.790 [2024-11-25 12:48:10.669272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.790 [2024-11-25 12:48:10.669398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.790 [2024-11-25 12:48:10.669555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.790 [2024-11-25 12:48:10.669556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:31.733 "tick_rate": 2400000000, 00:13:31.733 "poll_groups": [ 00:13:31.733 { 00:13:31.733 "name": "nvmf_tgt_poll_group_000", 00:13:31.733 "admin_qpairs": 0, 00:13:31.733 "io_qpairs": 0, 00:13:31.733 "current_admin_qpairs": 0, 00:13:31.733 "current_io_qpairs": 0, 00:13:31.733 "pending_bdev_io": 0, 00:13:31.733 "completed_nvme_io": 0, 00:13:31.733 "transports": [] 00:13:31.733 }, 00:13:31.733 { 00:13:31.733 "name": "nvmf_tgt_poll_group_001", 00:13:31.733 "admin_qpairs": 0, 00:13:31.733 "io_qpairs": 0, 00:13:31.733 "current_admin_qpairs": 0, 00:13:31.733 "current_io_qpairs": 0, 00:13:31.733 "pending_bdev_io": 0, 00:13:31.733 "completed_nvme_io": 0, 00:13:31.733 "transports": [] 00:13:31.733 }, 00:13:31.733 { 00:13:31.733 "name": "nvmf_tgt_poll_group_002", 00:13:31.733 "admin_qpairs": 0, 00:13:31.733 "io_qpairs": 0, 00:13:31.733 
"current_admin_qpairs": 0, 00:13:31.733 "current_io_qpairs": 0, 00:13:31.733 "pending_bdev_io": 0, 00:13:31.733 "completed_nvme_io": 0, 00:13:31.733 "transports": [] 00:13:31.733 }, 00:13:31.733 { 00:13:31.733 "name": "nvmf_tgt_poll_group_003", 00:13:31.733 "admin_qpairs": 0, 00:13:31.733 "io_qpairs": 0, 00:13:31.733 "current_admin_qpairs": 0, 00:13:31.733 "current_io_qpairs": 0, 00:13:31.733 "pending_bdev_io": 0, 00:13:31.733 "completed_nvme_io": 0, 00:13:31.733 "transports": [] 00:13:31.733 } 00:13:31.733 ] 00:13:31.733 }' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.733 [2024-11-25 12:48:11.493172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:31.733 "tick_rate": 2400000000, 00:13:31.733 "poll_groups": [ 00:13:31.733 { 00:13:31.733 "name": "nvmf_tgt_poll_group_000", 00:13:31.733 "admin_qpairs": 0, 00:13:31.733 "io_qpairs": 0, 00:13:31.733 "current_admin_qpairs": 0, 00:13:31.733 "current_io_qpairs": 0, 00:13:31.733 "pending_bdev_io": 0, 00:13:31.733 "completed_nvme_io": 0, 00:13:31.733 "transports": [ 00:13:31.733 { 00:13:31.733 "trtype": "TCP" 00:13:31.733 } 00:13:31.733 ] 00:13:31.733 }, 00:13:31.733 { 00:13:31.733 "name": "nvmf_tgt_poll_group_001", 00:13:31.733 "admin_qpairs": 0, 00:13:31.733 "io_qpairs": 0, 00:13:31.733 "current_admin_qpairs": 0, 00:13:31.733 "current_io_qpairs": 0, 00:13:31.733 "pending_bdev_io": 0, 00:13:31.733 "completed_nvme_io": 0, 00:13:31.733 "transports": [ 00:13:31.733 { 00:13:31.733 "trtype": "TCP" 00:13:31.733 } 00:13:31.733 ] 00:13:31.733 }, 00:13:31.733 { 00:13:31.733 "name": "nvmf_tgt_poll_group_002", 00:13:31.733 "admin_qpairs": 0, 00:13:31.733 "io_qpairs": 0, 00:13:31.733 "current_admin_qpairs": 0, 00:13:31.733 "current_io_qpairs": 0, 00:13:31.733 "pending_bdev_io": 0, 00:13:31.733 "completed_nvme_io": 0, 00:13:31.733 "transports": [ 00:13:31.733 { 00:13:31.733 "trtype": "TCP" 
00:13:31.733 } 00:13:31.733 ] 00:13:31.733 }, 00:13:31.733 { 00:13:31.733 "name": "nvmf_tgt_poll_group_003", 00:13:31.733 "admin_qpairs": 0, 00:13:31.733 "io_qpairs": 0, 00:13:31.733 "current_admin_qpairs": 0, 00:13:31.733 "current_io_qpairs": 0, 00:13:31.733 "pending_bdev_io": 0, 00:13:31.733 "completed_nvme_io": 0, 00:13:31.733 "transports": [ 00:13:31.733 { 00:13:31.733 "trtype": "TCP" 00:13:31.733 } 00:13:31.733 ] 00:13:31.733 } 00:13:31.733 ] 00:13:31.733 }' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:31.733 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:31.734 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:31.734 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:31.734 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:31.734 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:31.734 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:31.734 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:31.734 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.734 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.994 Malloc1 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.994 [2024-11-25 12:48:11.696238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:31.994 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:31.995 [2024-11-25 12:48:11.733212] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:31.995 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:31.995 could not add new controller: failed to write to nvme-fabrics device 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:31.995 12:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.995 12:48:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.380 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.380 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:33.380 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.380 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:33.380 12:48:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:35.925 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.926 [2024-11-25 12:48:15.470022] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:35.926 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:35.926 could not add new controller: failed to write to nvme-fabrics device 00:13:35.926 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:35.926 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.926 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.926 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.926 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:35.926 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.926 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.926 
12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.926 12:48:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.312 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.312 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:37.312 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.312 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:37.312 12:48:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:39.223 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:39.223 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:39.223 12:48:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.223 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:39.223 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.223 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:39.223 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.491 
12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.491 [2024-11-25 12:48:19.207208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.491 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.492 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.492 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.492 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.492 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.492 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.492 12:48:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.405 12:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.405 12:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:41.405 12:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.405 12:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:41.405 12:48:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.331 [2024-11-25 12:48:22.972440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.331 12:48:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.331 12:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.331 12:48:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.717 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.717 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:44.717 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.717 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:44.717 12:48:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.260 [2024-11-25 12:48:26.746922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.260 12:48:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.646 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:48.646 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:48.646 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.646 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:48.646 12:48:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:50.558 
12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:50.558 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:50.558 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.559 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.819 [2024-11-25 12:48:30.502443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.819 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.820 12:48:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.204 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.204 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:52.204 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.204 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:52.204 12:48:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:54.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 [2024-11-25 12:48:34.274148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.747 12:48:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.130 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:56.130 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:56.130 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.130 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:56.130 12:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:58.039 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:58.040 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:58.040 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.040 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:58.040 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.040 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:58.040 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.300 12:48:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.300 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.300 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.300 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:58.300 
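Condensed, each of the five iterations just traced exercises the full subsystem lifecycle through rpc_cmd and nvme-cli. A sketch of the loop as it appears in target/rpc.sh lines 81-94 (NVME_HOST is assumed to hold the --hostnqn/--hostid arguments shown above; rpc_cmd wraps scripts/rpc.py):

    loops=5
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # fixed nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done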
12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.300 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.300 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.300 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.300 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 [2024-11-25 12:48:38.039382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 [2024-11-25 12:48:38.111547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 
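The second loop being traced here (target/rpc.sh lines 99-107) repeats the lifecycle five times without ever connecting a host: nvmf_subsystem_add_ns is called without -n, so the target auto-assigns nsid 1, which is what nvmf_subsystem_remove_ns then removes. A condensed sketch under the same assumptions as the previous one:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # nsid auto-assigned: 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done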
12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 [2024-11-25 12:48:38.179747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.301 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 [2024-11-25 12:48:38.247943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 [2024-11-25 12:48:38.312171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:58.562 "tick_rate": 2400000000, 00:13:58.562 "poll_groups": [ 00:13:58.562 { 00:13:58.562 "name": "nvmf_tgt_poll_group_000", 00:13:58.562 "admin_qpairs": 0, 00:13:58.562 "io_qpairs": 224, 00:13:58.562 "current_admin_qpairs": 0, 00:13:58.562 "current_io_qpairs": 0, 00:13:58.562 "pending_bdev_io": 0, 00:13:58.562 "completed_nvme_io": 274, 00:13:58.562 "transports": [ 00:13:58.562 { 00:13:58.562 "trtype": "TCP" 00:13:58.562 } 00:13:58.562 ] 00:13:58.562 }, 00:13:58.562 { 00:13:58.562 "name": "nvmf_tgt_poll_group_001", 00:13:58.562 "admin_qpairs": 1, 00:13:58.562 "io_qpairs": 223, 00:13:58.562 "current_admin_qpairs": 0, 00:13:58.562 "current_io_qpairs": 0, 00:13:58.562 "pending_bdev_io": 0, 00:13:58.562 "completed_nvme_io": 503, 00:13:58.562 "transports": [ 00:13:58.562 { 00:13:58.562 "trtype": "TCP" 00:13:58.562 } 00:13:58.562 ] 00:13:58.562 }, 00:13:58.562 { 00:13:58.562 "name": "nvmf_tgt_poll_group_002", 00:13:58.562 "admin_qpairs": 6, 00:13:58.562 "io_qpairs": 218, 00:13:58.562 "current_admin_qpairs": 0, 00:13:58.562 "current_io_qpairs": 0, 00:13:58.562 "pending_bdev_io": 0, 00:13:58.562 "completed_nvme_io": 236, 00:13:58.562 "transports": [ 00:13:58.562 { 00:13:58.562 "trtype": "TCP" 00:13:58.562 } 00:13:58.562 ] 00:13:58.562 }, 00:13:58.562 { 00:13:58.562 "name": "nvmf_tgt_poll_group_003", 00:13:58.562 "admin_qpairs": 0, 00:13:58.562 "io_qpairs": 224, 00:13:58.562 "current_admin_qpairs": 0, 00:13:58.562 "current_io_qpairs": 0, 00:13:58.562 "pending_bdev_io": 0, 00:13:58.562 "completed_nvme_io": 226, 00:13:58.562 "transports": [ 00:13:58.562 { 00:13:58.562 "trtype": "TCP" 00:13:58.562 } 00:13:58.562 ] 00:13:58.562 } 00:13:58.562 ] 00:13:58.562 }' 00:13:58.562 12:48:38 
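The nvmf_get_stats JSON above is then reduced with the jsum helper, whose definition is visible in the following trace: it runs a jq filter over the captured stats and sums the resulting column with awk. A sketch (stats is assumed to hold the JSON string returned by rpc_cmd nvmf_get_stats):

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # Sanity checks as performed by rpc.sh lines 112-113:
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))    # 0+1+6+0 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))       # 224+223+218+224 = 889 in this run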
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:58.562 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:58.563 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.563 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:58.563 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:58.563 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:58.563 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:58.563 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:58.563 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.823 rmmod nvme_tcp 00:13:58.823 rmmod nvme_fabrics 00:13:58.823 rmmod nvme_keyring 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 542142 ']' 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 542142 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 542142 ']' 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 542142 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542142 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542142' 
00:13:58.823 killing process with pid 542142 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 542142 00:13:58.823 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 542142 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.084 12:48:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.001 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.001 00:14:01.001 real 0m39.240s 00:14:01.001 user 1m54.398s 00:14:01.001 sys 0m8.797s 00:14:01.001 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.001 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.001 ************************************ 00:14:01.001 END TEST nvmf_rpc 00:14:01.001 ************************************ 00:14:01.001 12:48:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:01.001 12:48:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:01.001 12:48:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.001 12:48:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.001 ************************************ 00:14:01.001 START TEST nvmf_invalid 00:14:01.001 ************************************ 00:14:01.001 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:01.263 * Looking for test storage... 
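Teardown above follows the usual nvmftestfini path: unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, kill the nvmf_tgt reactor (pid 542142 here), and restore iptables while filtering out the SPDK_NVMF rules. A sketch of the killprocess helper reconstructed from the traced commands (the real helper also special-cases a sudo-owned process, visible as the '[' reactor_0 = sudo ']' check):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                           # nothing to do if already gone
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }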
00:14:01.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.263 12:48:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:01.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.263 --rc genhtml_branch_coverage=1 00:14:01.263 --rc genhtml_function_coverage=1 00:14:01.263 --rc genhtml_legend=1 00:14:01.263 --rc geninfo_all_blocks=1 00:14:01.263 --rc geninfo_unexecuted_blocks=1 00:14:01.263 00:14:01.263 ' 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:01.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.263 --rc genhtml_branch_coverage=1 00:14:01.263 --rc genhtml_function_coverage=1 00:14:01.263 --rc genhtml_legend=1 00:14:01.263 --rc geninfo_all_blocks=1 00:14:01.263 --rc geninfo_unexecuted_blocks=1 00:14:01.263 00:14:01.263 ' 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:01.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.263 --rc genhtml_branch_coverage=1 00:14:01.263 --rc genhtml_function_coverage=1 00:14:01.263 --rc genhtml_legend=1 00:14:01.263 --rc geninfo_all_blocks=1 00:14:01.263 --rc geninfo_unexecuted_blocks=1 00:14:01.263 00:14:01.263 ' 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:01.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.263 --rc genhtml_branch_coverage=1 00:14:01.263 --rc genhtml_function_coverage=1 00:14:01.263 --rc genhtml_legend=1 00:14:01.263 --rc geninfo_all_blocks=1 00:14:01.263 --rc geninfo_unexecuted_blocks=1 00:14:01.263 00:14:01.263 ' 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:01.263 12:48:41 
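The lt 1.15 2 call traced above comes from scripts/common.sh: each version string is split on '.', '-' and ':' into an array, and the components are compared element by element (the decimal calls at @353-@355 validate each component). A condensed sketch of the traced cmp_versions path, not the verbatim implementation:

    cmp_versions() {                  # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
            (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]            # all components equal: only <=, >=, == succeed
    }

Here cmp_versions 1.15 '<' 2 succeeds (1 < 2 on the first component), so the lcov 1.x LCOV_OPTS shown above get exported.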
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.263 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:01.264 12:48:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:11.265 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:11.265 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:11.265 Found net devices under 0000:31:00.0: cvl_0_0 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:11.265 Found net devices under 0000:31:00.1: cvl_0_1 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.265 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:11.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:14:11.266 00:14:11.266 --- 10.0.0.2 ping statistics --- 00:14:11.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.266 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:14:11.266 00:14:11.266 --- 10.0.0.1 ping statistics --- 00:14:11.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.266 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=552559 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 552559 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 552559 ']' 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.266 12:48:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:11.266 [2024-11-25 12:48:49.756888] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
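[editor's note] The startup sequence traced above reduces to one pattern: launch nvmf_tgt inside the target network namespace, record its pid, and poll the JSON-RPC socket until the app answers. A minimal sketch of that pattern, with the paths, namespace name, and flags taken from this run; the retry loop is a simplification of waitforlisten, not the helper itself:

    #!/usr/bin/env bash
    # Start the SPDK NVMe-oF target inside the namespace set up above and
    # wait until its JSON-RPC socket (/var/tmp/spdk.sock) accepts commands.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        # rpc.py exits non-zero until the app is listening on the socket
        "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    kill -0 "$nvmfpid"   # assert the target process is still alive
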
00:14:11.266 [2024-11-25 12:48:49.756956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.266 [2024-11-25 12:48:49.848230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.266 [2024-11-25 12:48:49.889548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.266 [2024-11-25 12:48:49.889584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.266 [2024-11-25 12:48:49.889592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.266 [2024-11-25 12:48:49.889599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.266 [2024-11-25 12:48:49.889606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.266 [2024-11-25 12:48:49.891463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.266 [2024-11-25 12:48:49.891579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.266 [2024-11-25 12:48:49.891740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.266 [2024-11-25 12:48:49.891740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30442 00:14:11.266 [2024-11-25 12:48:50.770993] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:11.266 { 00:14:11.266 "nqn": "nqn.2016-06.io.spdk:cnode30442", 00:14:11.266 "tgt_name": "foobar", 00:14:11.266 "method": "nvmf_create_subsystem", 00:14:11.266 "req_id": 1 00:14:11.266 } 00:14:11.266 Got JSON-RPC error response 00:14:11.266 response: 00:14:11.266 { 00:14:11.266 "code": -32603, 00:14:11.266 "message": "Unable to find target foobar" 00:14:11.266 }' 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:11.266 { 00:14:11.266 "nqn": "nqn.2016-06.io.spdk:cnode30442", 00:14:11.266 "tgt_name": "foobar", 00:14:11.266 "method": "nvmf_create_subsystem", 00:14:11.266 "req_id": 1 00:14:11.266 } 00:14:11.266 Got JSON-RPC error response 00:14:11.266 
response: 00:14:11.266 { 00:14:11.266 "code": -32603, 00:14:11.266 "message": "Unable to find target foobar" 00:14:11.266 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16505 00:14:11.266 [2024-11-25 12:48:50.963661] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16505: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:11.266 { 00:14:11.266 "nqn": "nqn.2016-06.io.spdk:cnode16505", 00:14:11.266 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:11.266 "method": "nvmf_create_subsystem", 00:14:11.266 "req_id": 1 00:14:11.266 } 00:14:11.266 Got JSON-RPC error response 00:14:11.266 response: 00:14:11.266 { 00:14:11.266 "code": -32602, 00:14:11.266 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:11.266 }' 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:11.266 { 00:14:11.266 "nqn": "nqn.2016-06.io.spdk:cnode16505", 00:14:11.266 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:11.266 "method": "nvmf_create_subsystem", 00:14:11.266 "req_id": 1 00:14:11.266 } 00:14:11.266 Got JSON-RPC error response 00:14:11.266 response: 00:14:11.266 { 00:14:11.266 "code": -32602, 00:14:11.266 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:11.266 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:11.266 12:48:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:11.266 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12069 00:14:11.266 [2024-11-25 12:48:51.156256] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12069: invalid model number 'SPDK_Controller' 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:11.528 { 00:14:11.528 "nqn": "nqn.2016-06.io.spdk:cnode12069", 00:14:11.528 "model_number": "SPDK_Controller\u001f", 00:14:11.528 "method": "nvmf_create_subsystem", 00:14:11.528 "req_id": 1 00:14:11.528 } 00:14:11.528 Got JSON-RPC error response 00:14:11.528 response: 00:14:11.528 { 00:14:11.528 "code": -32602, 00:14:11.528 "message": "Invalid MN SPDK_Controller\u001f" 00:14:11.528 }' 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:11.528 { 00:14:11.528 "nqn": "nqn.2016-06.io.spdk:cnode12069", 00:14:11.528 "model_number": "SPDK_Controller\u001f", 00:14:11.528 "method": "nvmf_create_subsystem", 00:14:11.528 "req_id": 1 00:14:11.528 } 00:14:11.528 Got JSON-RPC error response 00:14:11.528 response: 00:14:11.528 { 00:14:11.528 "code": -32602, 00:14:11.528 "message": "Invalid MN SPDK_Controller\u001f" 00:14:11.528 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:11.528 12:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:11.528 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x68' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '69WNK[<7mm!k%psE|hmf' 00:14:11.529 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '69WNK[<7mm!k%psE|hmf' nqn.2016-06.io.spdk:cnode23065 00:14:11.791 [2024-11-25 12:48:51.513414] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23065: invalid serial number '69WNK[<7mm!k%psE|hmf' 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:11.791 { 00:14:11.791 "nqn": "nqn.2016-06.io.spdk:cnode23065", 00:14:11.791 "serial_number": "69WNK[<7mm!k%psE|h\u007fmf", 00:14:11.791 "method": "nvmf_create_subsystem", 00:14:11.791 "req_id": 1 00:14:11.791 } 00:14:11.791 Got JSON-RPC error response 00:14:11.791 response: 00:14:11.791 { 00:14:11.791 "code": -32602, 00:14:11.791 "message": "Invalid SN 69WNK[<7mm!k%psE|h\u007fmf" 00:14:11.791 }' 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:11.791 { 00:14:11.791 "nqn": "nqn.2016-06.io.spdk:cnode23065", 00:14:11.791 "serial_number": "69WNK[<7mm!k%psE|h\u007fmf", 00:14:11.791 "method": "nvmf_create_subsystem", 00:14:11.791 "req_id": 1 00:14:11.791 } 00:14:11.791 Got JSON-RPC error response 00:14:11.791 response: 00:14:11.791 { 00:14:11.791 "code": -32602, 00:14:11.791 "message": "Invalid SN 69WNK[<7mm!k%psE|h\u007fmf" 00:14:11.791 } == 
*\I\n\v\a\l\i\d\ \S\N* ]] 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.791 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x5a' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 50 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:11.792 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:14:12.053 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:12.053 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:12.053 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:12.053 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.053 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+=o 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x42' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Phu6ZL\5WbX29!D,Mf%imJ%'\''LtT^(npo.utGMB,|' 00:14:12.054 12:48:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Phu6ZL\5WbX29!D,Mf%imJ%'\''LtT^(npo.utGMB,|' nqn.2016-06.io.spdk:cnode23744 00:14:12.315 [2024-11-25 12:48:52.031086] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23744: invalid model number 'Phu6ZL\5WbX29!D,Mf%imJ%'LtT^(npo.utGMB,|' 00:14:12.315 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:12.315 { 00:14:12.315 "nqn": "nqn.2016-06.io.spdk:cnode23744", 00:14:12.315 "model_number": "Phu6ZL\\5WbX29!D,Mf%imJ%'\''LtT^(npo.u\u007ftGMB,|", 00:14:12.315 "method": "nvmf_create_subsystem", 00:14:12.315 "req_id": 1 00:14:12.315 } 00:14:12.315 Got JSON-RPC error response 00:14:12.315 response: 00:14:12.315 { 00:14:12.315 "code": -32602, 00:14:12.315 "message": "Invalid MN Phu6ZL\\5WbX29!D,Mf%imJ%'\''LtT^(npo.u\u007ftGMB,|" 00:14:12.315 }' 00:14:12.315 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:12.315 { 00:14:12.315 "nqn": "nqn.2016-06.io.spdk:cnode23744", 00:14:12.315 "model_number": "Phu6ZL\\5WbX29!D,Mf%imJ%'LtT^(npo.u\u007ftGMB,|", 00:14:12.315 "method": "nvmf_create_subsystem", 00:14:12.315 "req_id": 1 00:14:12.315 } 00:14:12.315 Got JSON-RPC error response 00:14:12.315 response: 00:14:12.315 { 00:14:12.315 "code": -32602, 00:14:12.315 "message": "Invalid MN Phu6ZL\\5WbX29!D,Mf%imJ%'LtT^(npo.u\u007ftGMB,|" 00:14:12.315 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:12.315 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:12.315 [2024-11-25 12:48:52.215756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.576 
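The stretch of trace above is target/invalid.sh assembling a junk model number one byte at a time (printf %x to pick a code point, echo -e to turn it into a character), then handing it to nvmf_create_subsystem -d and asserting the target refuses it with "Invalid MN". A condensed sketch of that pattern, not the literal invalid.sh generator (charset simplified, paths shortened; NQN and error string are taken from this log, and the 41-byte length is one illustrative way to trip the check, since the NVMe MN field is capped at 40 bytes):

# Build a junk model number byte by byte, expect the RPC to reject it.
gen_random_mn() {
    local len=$1 out='' x ll
    for ((ll = 0; ll < len; ll++)); do
        x=$(printf '%x' $((RANDOM % 94 + 33)))   # printable ASCII 33-126
        out+=$(echo -e "\\x$x")
    done
    printf '%s\n' "$out"
}
mn=$(gen_random_mn 41)
out=$(scripts/rpc.py nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode23744 2>&1) || true
[[ $out == *"Invalid MN"* ]] && echo 'rejected as expected'

With the negative model-number case done, the script creates the TCP transport so the listener and cntlid cases below have a live target to poke.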
12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:12.576 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:12.576 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:12.576 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:12.576 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:12.576 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:12.837 [2024-11-25 12:48:52.598392] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:12.837 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:12.837 { 00:14:12.837 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:12.837 "listen_address": { 00:14:12.837 "trtype": "tcp", 00:14:12.837 "traddr": "", 00:14:12.837 "trsvcid": "4421" 00:14:12.837 }, 00:14:12.837 "method": "nvmf_subsystem_remove_listener", 00:14:12.837 "req_id": 1 00:14:12.837 } 00:14:12.837 Got JSON-RPC error response 00:14:12.837 response: 00:14:12.837 { 00:14:12.837 "code": -32602, 00:14:12.837 "message": "Invalid parameters" 00:14:12.837 }' 00:14:12.837 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:12.837 { 00:14:12.837 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:12.837 "listen_address": { 00:14:12.837 "trtype": "tcp", 00:14:12.837 "traddr": "", 00:14:12.837 "trsvcid": "4421" 00:14:12.837 }, 00:14:12.837 "method": "nvmf_subsystem_remove_listener", 00:14:12.837 "req_id": 1 00:14:12.837 } 00:14:12.837 Got JSON-RPC error response 00:14:12.837 response: 00:14:12.837 { 00:14:12.837 "code": -32602, 00:14:12.837 "message": "Invalid parameters" 00:14:12.837 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:12.837 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31023 -i 0 00:14:13.098 [2024-11-25 12:48:52.790973] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31023: invalid cntlid range [0-65519] 00:14:13.098 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:13.098 { 00:14:13.098 "nqn": "nqn.2016-06.io.spdk:cnode31023", 00:14:13.098 "min_cntlid": 0, 00:14:13.098 "method": "nvmf_create_subsystem", 00:14:13.098 "req_id": 1 00:14:13.098 } 00:14:13.098 Got JSON-RPC error response 00:14:13.098 response: 00:14:13.098 { 00:14:13.098 "code": -32602, 00:14:13.098 "message": "Invalid cntlid range [0-65519]" 00:14:13.098 }' 00:14:13.098 12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:13.098 { 00:14:13.098 "nqn": "nqn.2016-06.io.spdk:cnode31023", 00:14:13.098 "min_cntlid": 0, 00:14:13.098 "method": "nvmf_create_subsystem", 00:14:13.098 "req_id": 1 00:14:13.098 } 00:14:13.098 Got JSON-RPC error response 00:14:13.098 response: 00:14:13.098 { 00:14:13.098 "code": -32602, 00:14:13.098 "message": "Invalid cntlid range [0-65519]" 00:14:13.098 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:13.098 
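The min_cntlid=0 probe above and the four that follow walk the controller-ID window from both ends: SPDK accepts cntlid values 1 through 65519 and requires min_cntlid <= max_cntlid, so 0, 65520, a max of 0, a max of 65520, and the inverted pair 6/5 all come back as "Invalid cntlid range". A condensed sketch of those probes (rpc.py path shortened, random cnode names standing in for the fixed ones in this log):

# Every (min,max) pair here falls outside the valid window, so each call
# is expected to fail; anything else is a regression.
rpc=scripts/rpc.py
for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
    out=$($rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" $args 2>&1) || true
    [[ $out == *'Invalid cntlid range'* ]] || echo "unexpected: $out"
done

The section then closes with one last negative probe, nvmf_delete_target against the nonexistent name "foobar", before invalid.sh tears the target down.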
12:48:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6437 -i 65520 00:14:13.098 [2024-11-25 12:48:52.979597] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6437: invalid cntlid range [65520-65519] 00:14:13.359 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:13.359 { 00:14:13.359 "nqn": "nqn.2016-06.io.spdk:cnode6437", 00:14:13.359 "min_cntlid": 65520, 00:14:13.359 "method": "nvmf_create_subsystem", 00:14:13.359 "req_id": 1 00:14:13.359 } 00:14:13.359 Got JSON-RPC error response 00:14:13.359 response: 00:14:13.359 { 00:14:13.359 "code": -32602, 00:14:13.359 "message": "Invalid cntlid range [65520-65519]" 00:14:13.359 }' 00:14:13.359 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:13.359 { 00:14:13.359 "nqn": "nqn.2016-06.io.spdk:cnode6437", 00:14:13.359 "min_cntlid": 65520, 00:14:13.359 "method": "nvmf_create_subsystem", 00:14:13.359 "req_id": 1 00:14:13.359 } 00:14:13.359 Got JSON-RPC error response 00:14:13.359 response: 00:14:13.359 { 00:14:13.359 "code": -32602, 00:14:13.359 "message": "Invalid cntlid range [65520-65519]" 00:14:13.359 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:13.359 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27881 -I 0 00:14:13.359 [2024-11-25 12:48:53.164138] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27881: invalid cntlid range [1-0] 00:14:13.359 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:13.359 { 00:14:13.359 "nqn": "nqn.2016-06.io.spdk:cnode27881", 00:14:13.359 "max_cntlid": 0, 00:14:13.359 "method": "nvmf_create_subsystem", 00:14:13.359 "req_id": 1 00:14:13.359 } 00:14:13.359 Got JSON-RPC error response 00:14:13.360 response: 00:14:13.360 { 00:14:13.360 "code": -32602, 00:14:13.360 "message": "Invalid cntlid range [1-0]" 00:14:13.360 }' 00:14:13.360 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:13.360 { 00:14:13.360 "nqn": "nqn.2016-06.io.spdk:cnode27881", 00:14:13.360 "max_cntlid": 0, 00:14:13.360 "method": "nvmf_create_subsystem", 00:14:13.360 "req_id": 1 00:14:13.360 } 00:14:13.360 Got JSON-RPC error response 00:14:13.360 response: 00:14:13.360 { 00:14:13.360 "code": -32602, 00:14:13.360 "message": "Invalid cntlid range [1-0]" 00:14:13.360 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:13.360 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20816 -I 65520 00:14:13.620 [2024-11-25 12:48:53.348720] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20816: invalid cntlid range [1-65520] 00:14:13.620 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:13.620 { 00:14:13.620 "nqn": "nqn.2016-06.io.spdk:cnode20816", 00:14:13.620 "max_cntlid": 65520, 00:14:13.620 "method": "nvmf_create_subsystem", 00:14:13.620 "req_id": 1 00:14:13.620 } 00:14:13.620 Got JSON-RPC error response 00:14:13.620 response: 00:14:13.620 { 00:14:13.620 "code": -32602, 00:14:13.620 "message": 
"Invalid cntlid range [1-65520]" 00:14:13.620 }' 00:14:13.620 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:13.620 { 00:14:13.620 "nqn": "nqn.2016-06.io.spdk:cnode20816", 00:14:13.620 "max_cntlid": 65520, 00:14:13.620 "method": "nvmf_create_subsystem", 00:14:13.620 "req_id": 1 00:14:13.620 } 00:14:13.620 Got JSON-RPC error response 00:14:13.620 response: 00:14:13.620 { 00:14:13.620 "code": -32602, 00:14:13.620 "message": "Invalid cntlid range [1-65520]" 00:14:13.620 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:13.620 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30315 -i 6 -I 5 00:14:13.882 [2024-11-25 12:48:53.529297] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30315: invalid cntlid range [6-5] 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:13.882 { 00:14:13.882 "nqn": "nqn.2016-06.io.spdk:cnode30315", 00:14:13.882 "min_cntlid": 6, 00:14:13.882 "max_cntlid": 5, 00:14:13.882 "method": "nvmf_create_subsystem", 00:14:13.882 "req_id": 1 00:14:13.882 } 00:14:13.882 Got JSON-RPC error response 00:14:13.882 response: 00:14:13.882 { 00:14:13.882 "code": -32602, 00:14:13.882 "message": "Invalid cntlid range [6-5]" 00:14:13.882 }' 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:13.882 { 00:14:13.882 "nqn": "nqn.2016-06.io.spdk:cnode30315", 00:14:13.882 "min_cntlid": 6, 00:14:13.882 "max_cntlid": 5, 00:14:13.882 "method": "nvmf_create_subsystem", 00:14:13.882 "req_id": 1 00:14:13.882 } 00:14:13.882 Got JSON-RPC error response 00:14:13.882 response: 00:14:13.882 { 00:14:13.882 "code": -32602, 00:14:13.882 "message": "Invalid cntlid range [6-5]" 00:14:13.882 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:13.882 { 00:14:13.882 "name": "foobar", 00:14:13.882 "method": "nvmf_delete_target", 00:14:13.882 "req_id": 1 00:14:13.882 } 00:14:13.882 Got JSON-RPC error response 00:14:13.882 response: 00:14:13.882 { 00:14:13.882 "code": -32602, 00:14:13.882 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:13.882 }' 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:13.882 { 00:14:13.882 "name": "foobar", 00:14:13.882 "method": "nvmf_delete_target", 00:14:13.882 "req_id": 1 00:14:13.882 } 00:14:13.882 Got JSON-RPC error response 00:14:13.882 response: 00:14:13.882 { 00:14:13.882 "code": -32602, 00:14:13.882 "message": "The specified target doesn't exist, cannot delete it." 
00:14:13.882 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.882 rmmod nvme_tcp 00:14:13.882 rmmod nvme_fabrics 00:14:13.882 rmmod nvme_keyring 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 552559 ']' 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 552559 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 552559 ']' 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 552559 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.882 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552559 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552559' 00:14:14.144 killing process with pid 552559 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 552559 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 552559 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.144 12:48:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.207 12:48:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:16.207 00:14:16.207 real 0m15.104s 00:14:16.207 user 0m20.978s 00:14:16.207 sys 0m7.384s 00:14:16.207 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.207 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:16.207 ************************************ 00:14:16.207 END TEST nvmf_invalid 00:14:16.207 ************************************ 00:14:16.207 12:48:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:16.207 12:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:16.207 12:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.207 12:48:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.207 ************************************ 00:14:16.207 START TEST nvmf_connect_stress 00:14:16.207 ************************************ 00:14:16.207 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:16.469 * Looking for test storage... 
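With invalid.sh torn down (module unloads, killprocess 552559, iptables restored, namespace flushed), run_test hands the rig to connect_stress.sh. Before anything runs, autotest_common.sh decides whether the installed lcov predates 2.x; the cmp_versions trace that follows splits both version strings on ".", "-" and ":" and compares them field by field. A condensed sketch of that comparison (numeric fields only, as in the trace):

# True if $1 sorts before $2 under dotted-version ordering.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # missing fields count as 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'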
00:14:16.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.469 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:16.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.470 --rc genhtml_branch_coverage=1 00:14:16.470 --rc genhtml_function_coverage=1 00:14:16.470 --rc genhtml_legend=1 00:14:16.470 --rc geninfo_all_blocks=1 00:14:16.470 --rc geninfo_unexecuted_blocks=1 00:14:16.470 00:14:16.470 ' 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:16.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.470 --rc genhtml_branch_coverage=1 00:14:16.470 --rc genhtml_function_coverage=1 00:14:16.470 --rc genhtml_legend=1 00:14:16.470 --rc geninfo_all_blocks=1 00:14:16.470 --rc geninfo_unexecuted_blocks=1 00:14:16.470 00:14:16.470 ' 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:16.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.470 --rc genhtml_branch_coverage=1 00:14:16.470 --rc genhtml_function_coverage=1 00:14:16.470 --rc genhtml_legend=1 00:14:16.470 --rc geninfo_all_blocks=1 00:14:16.470 --rc geninfo_unexecuted_blocks=1 00:14:16.470 00:14:16.470 ' 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:16.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.470 --rc genhtml_branch_coverage=1 00:14:16.470 --rc genhtml_function_coverage=1 00:14:16.470 --rc genhtml_legend=1 00:14:16.470 --rc geninfo_all_blocks=1 00:14:16.470 --rc geninfo_unexecuted_blocks=1 00:14:16.470 00:14:16.470 ' 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.470 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:16.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:16.471 12:48:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.620 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.620 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:24.620 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:24.620 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:24.620 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:24.620 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:24.620 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:24.621 12:49:04 
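(The "[: : integer expression expected" complaint just above is common.sh line 33 testing an empty variable with -eq; the run carries on regardless.) What follows is nvmf/common.sh classifying the machine's NICs by PCI vendor:device ID (the two 0x8086:0x159b ports found below are Intel E810, hence the ice driver) and then fencing them into a point-to-point rig: cvl_0_0 moves into a private network namespace as the target side, cvl_0_1 stays in the root namespace as the initiator, an iptables rule opens port 4420, and a ping in each direction proves the link. A condensed sketch of that plumbing, with names and addresses exactly as traced below:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator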
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:24.621 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:24.621 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:24.621 Found net devices under 0000:31:00.0: cvl_0_0 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:24.621 Found net devices under 0000:31:00.1: cvl_0_1 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:24.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:14:24.621 00:14:24.621 --- 10.0.0.2 ping statistics --- 00:14:24.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.621 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:14:24.621 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:14:24.622 00:14:24.622 --- 10.0.0.1 ping statistics --- 00:14:24.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.622 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=558100 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 558100 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 558100 ']' 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:24.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.622 12:49:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.883 [2024-11-25 12:49:04.536033] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:14:24.883 [2024-11-25 12:49:04.536103] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.883 [2024-11-25 12:49:04.645633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:24.883 [2024-11-25 12:49:04.698266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.883 [2024-11-25 12:49:04.698317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.883 [2024-11-25 12:49:04.698326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.883 [2024-11-25 12:49:04.698334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.883 [2024-11-25 12:49:04.698340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.883 [2024-11-25 12:49:04.700202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.883 [2024-11-25 12:49:04.700368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.883 [2024-11-25 12:49:04.700368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.455 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.455 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:25.455 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:25.455 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:25.455 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.716 [2024-11-25 12:49:05.397723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
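With the namespace rig in place, nvmf_tgt comes up inside cvl_0_0_ns_spdk on core mask 0xE (three reactors on cores 1-3, matching the trace above), and connect_stress.sh drives the usual bring-up over /var/tmp/spdk.sock. The four rpc_cmd calls traced below, condensed, with every value taken from this log (rpc.py path shortened):

r=scripts/rpc.py
$r nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
$r nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                  # -a any host, -s serial, -m max namespaces
$r nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$r bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512 B blocks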
00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.716 [2024-11-25 12:49:05.422200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.716 NULL1 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=558452 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:25.716 12:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.716 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.977 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.977 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:25.977 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.977 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.977 12:49:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.547 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.547 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:26.547 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.547 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.547 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.808 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.808 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:26.808 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.808 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.808 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.069 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.069 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:27.069 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.069 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.069 12:49:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.330 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.330 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:27.330 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.330 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.330 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.590 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.590 12:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:27.590 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.590 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.590 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.160 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.160 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:28.160 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.160 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.160 12:49:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.421 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.421 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:28.421 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.421 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.421 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.682 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.682 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:28.682 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.682 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.682 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.942 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.942 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:28.942 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.942 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.942 12:49:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.516 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.516 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:29.516 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.516 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.516 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.776 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.776 12:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:29.776 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.776 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.776 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.036 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.036 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:30.036 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.036 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.036 12:49:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.297 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.297 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:30.297 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.297 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.297 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:30.560 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.560 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:30.560 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.560 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.560 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.130 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.130 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:31.130 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.130 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.130 12:49:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.390 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.390 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:31.390 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.390 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.390 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.651 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.651 12:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:31.651 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.651 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.651 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.912 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.912 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:31.912 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.912 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.912 12:49:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.173 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.173 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:32.173 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.173 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.173 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.743 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.743 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:32.743 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.743 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.743 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.003 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.003 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:33.003 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.003 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.003 12:49:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.264 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.264 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:33.264 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.264 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.264 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.524 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.524 12:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:33.524 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.524 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.525 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.785 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.785 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:33.785 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.785 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.785 12:49:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.380 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.380 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:34.380 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.380 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.380 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.640 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.640 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:34.640 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.640 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.640 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.900 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.900 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:34.900 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.900 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.900 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.160 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.160 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:35.160 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.160 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.160 12:49:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.421 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.421 12:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:35.421 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.421 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.421 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.680 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 558452 00:14:35.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (558452) - No such process 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 558452 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:35.940 rmmod nvme_tcp 00:14:35.940 rmmod nvme_fabrics 00:14:35.940 rmmod nvme_keyring 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 558100 ']' 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 558100 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 558100 ']' 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 558100 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 558100 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
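The ~10 s stretch above (12:49:05 to 12:49:15) is connect_stress.sh supervising the stress tool: it keeps issuing the rpc.txt batch while the process stays alive, then reaps it once kill -0 starts failing with "No such process". A minimal sketch of that pattern, using the $PERF_PID and $rpcs variables set earlier in the trace; note that feeding rpc_cmd from $rpcs is an inference, since bash xtrace does not show redirections.

    # Sketch of the supervision loop traced above (connect_stress.sh@34-39).
    # ASSUMPTION: rpc_cmd reads its batch from $rpcs (redirections are invisible in xtrace).
    while kill -0 "$PERF_PID" 2> /dev/null; do   # still running? (PID 558452 in this run)
        rpc_cmd < "$rpcs"                        # replay the RPC batch against the target
    done
    wait "$PERF_PID"                             # reap after kill -0 reports "No such process"
    rm -f "$rpcs"                                # drop the temporary rpc.txt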
00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 558100' 00:14:35.940 killing process with pid 558100 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 558100 00:14:35.940 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 558100 00:14:36.200 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.201 12:49:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.112 12:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:38.112 00:14:38.112 real 0m21.881s 00:14:38.112 user 0m42.469s 00:14:38.112 sys 0m9.558s 00:14:38.112 12:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.112 12:49:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.112 ************************************ 00:14:38.112 END TEST nvmf_connect_stress 00:14:38.112 ************************************ 00:14:38.112 12:49:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:38.112 12:49:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:38.112 12:49:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.112 12:49:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.374 ************************************ 00:14:38.374 START TEST nvmf_fused_ordering 00:14:38.374 ************************************ 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:38.374 * Looking for test storage... 
00:14:38.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:38.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.374 --rc genhtml_branch_coverage=1 00:14:38.374 --rc genhtml_function_coverage=1 00:14:38.374 --rc genhtml_legend=1 00:14:38.374 --rc geninfo_all_blocks=1 00:14:38.374 --rc geninfo_unexecuted_blocks=1 00:14:38.374 00:14:38.374 ' 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:38.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.374 --rc genhtml_branch_coverage=1 00:14:38.374 --rc genhtml_function_coverage=1 00:14:38.374 --rc genhtml_legend=1 00:14:38.374 --rc geninfo_all_blocks=1 00:14:38.374 --rc geninfo_unexecuted_blocks=1 00:14:38.374 00:14:38.374 ' 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:38.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.374 --rc genhtml_branch_coverage=1 00:14:38.374 --rc genhtml_function_coverage=1 00:14:38.374 --rc genhtml_legend=1 00:14:38.374 --rc geninfo_all_blocks=1 00:14:38.374 --rc geninfo_unexecuted_blocks=1 00:14:38.374 00:14:38.374 ' 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:38.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.374 --rc genhtml_branch_coverage=1 00:14:38.374 --rc genhtml_function_coverage=1 00:14:38.374 --rc genhtml_legend=1 00:14:38.374 --rc geninfo_all_blocks=1 00:14:38.374 --rc geninfo_unexecuted_blocks=1 00:14:38.374 00:14:38.374 ' 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.374 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.375 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.375 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.375 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.375 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:38.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.375 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.375 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.375 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:38.635 12:49:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:46.777 12:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:46.777 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:46.778 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:46.778 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:46.778 Found net devices under 0000:31:00.0: cvl_0_0 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:46.778 Found net devices under 0000:31:00.1: cvl_0_1 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:46.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:14:46.778 00:14:46.778 --- 10.0.0.2 ping statistics --- 00:14:46.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.778 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:14:46.778 00:14:46.778 --- 10.0.0.1 ping statistics --- 00:14:46.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.778 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=565174 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 565174 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 565174 ']' 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:46.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.778 12:49:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.039 [2024-11-25 12:49:26.722998] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:14:47.039 [2024-11-25 12:49:26.723052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.039 [2024-11-25 12:49:26.825822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.039 [2024-11-25 12:49:26.875308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.039 [2024-11-25 12:49:26.875354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.039 [2024-11-25 12:49:26.875363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.039 [2024-11-25 12:49:26.875370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.039 [2024-11-25 12:49:26.875377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.039 [2024-11-25 12:49:26.876205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.980 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.981 [2024-11-25 12:49:27.592866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.981 [2024-11-25 12:49:27.617180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.981 NULL1 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.981 12:49:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:47.981 [2024-11-25 12:49:27.687707] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
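Before the tool's output begins, it is worth collecting what the trace above just did: nvmf_tcp_init carved one E810 port into a private network namespace, nvmfappstart launched nvmf_tgt inside it, and a short RPC sequence wired a null-bdev namespace into a TCP subsystem. Below is a condensed bash replay; every name, address, and argument is taken from this run's trace, and the sleep is an assumption standing in for the harness's waitforlisten polling of /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    # Sketch of the setup traced above (nvmf/common.sh + fused_ordering.sh).
    set -e
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk   # namespace the target runs in
    TGT=cvl_0_0          # target-side port
    INI=cvl_0_1          # initiator-side port

    ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
    ip netns add "$NS"
    ip link set "$TGT" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
    ip link set "$INI" up
    ip netns exec "$NS" ip link set "$TGT" up
    ip netns exec "$NS" ip link set lo up
    # Open TCP/4420; the suite's ipts wrapper tags the rule with an
    # SPDK_NVMF comment so teardown can strip exactly these rules later.
    iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    sleep 2   # assumption: stands in for waitforlisten on /var/tmp/spdk.sock
    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    "$SPDK/test/nvme/fused_ordering/fused_ordering" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line that follows appears to be the tool counting one fused command pair as it checks that the target preserves NVMe fused-command ordering; this run counts from 0 through 1023.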
00:14:47.981 [2024-11-25 12:49:27.687750] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565278 ] 00:14:48.242 Attached to nqn.2016-06.io.spdk:cnode1 00:14:48.242 Namespace ID: 1 size: 1GB 00:14:48.242 fused_ordering(0) [fused_ordering(1) through fused_ordering(1022): 1,022 further counter lines elided, logged in bursts with timestamps running from 00:14:48.243 to 00:14:50.219] 00:14:50.219 fused_ordering(1023) 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:50.219 rmmod nvme_tcp 00:14:50.219 rmmod nvme_fabrics 00:14:50.219 rmmod nvme_keyring 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:50.219
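The fini path above has already synced and unloaded the initiator-side kernel modules; the trace below kills the target reactor, strips the tagged firewall rules, and tears down the namespace. A minimal sketch of the same cleanup, assuming the pid (565174) and the names from this run, and assuming _remove_spdk_ns amounts to deleting the namespace:

    # Sketch of nvmftestfini / nvmf_tcp_fini for this run.
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 565174 && wait 565174         # killprocess; wait works because the
                                       # suite started nvmf_tgt from this shell
    # Remove only the rules tagged SPDK_NVMF; everything else is restored as-is.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1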
12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 565174 ']' 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 565174 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 565174 ']' 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 565174 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.219 12:49:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565174 00:14:50.219 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:50.219 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:50.219 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565174' 00:14:50.219 killing process with pid 565174 00:14:50.219 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 565174 00:14:50.219 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 565174 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.480 12:49:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.496 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:52.496 00:14:52.496 real 0m14.202s 00:14:52.496 user 0m7.291s 00:14:52.496 sys 0m7.594s 00:14:52.496 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.496 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.496 ************************************ 00:14:52.496 END TEST nvmf_fused_ordering
************************************ 00:14:52.496 12:49:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:52.496 12:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.496 12:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.496 12:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.496 ************************************ 00:14:52.496 START TEST nvmf_ns_masking 00:14:52.496 ************************************ 00:14:52.496 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:52.759 * Looking for test storage... 00:14:52.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:52.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.759 --rc genhtml_branch_coverage=1 00:14:52.759 --rc genhtml_function_coverage=1 00:14:52.759 --rc genhtml_legend=1 00:14:52.759 --rc geninfo_all_blocks=1 00:14:52.759 --rc geninfo_unexecuted_blocks=1 00:14:52.759 00:14:52.759 ' 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:52.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.759 --rc genhtml_branch_coverage=1 00:14:52.759 --rc genhtml_function_coverage=1 00:14:52.759 --rc genhtml_legend=1 00:14:52.759 --rc geninfo_all_blocks=1 00:14:52.759 --rc geninfo_unexecuted_blocks=1 00:14:52.759 00:14:52.759 ' 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:52.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.759 --rc genhtml_branch_coverage=1 00:14:52.759 --rc genhtml_function_coverage=1 00:14:52.759 --rc genhtml_legend=1 00:14:52.759 --rc geninfo_all_blocks=1 00:14:52.759 --rc geninfo_unexecuted_blocks=1 00:14:52.759 00:14:52.759 ' 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:52.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.759 --rc genhtml_branch_coverage=1 00:14:52.759 --rc genhtml_function_coverage=1 00:14:52.759 --rc genhtml_legend=1 00:14:52.759 --rc geninfo_all_blocks=1 00:14:52.759 --rc geninfo_unexecuted_blocks=1 00:14:52.759 00:14:52.759 ' 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.759 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=feb389ce-c7d7-4679-a2c6-f3886d78da58 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c6aea9d4-df8a-4247-b663-933916eb26da 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=58c4ef96-ea5f-4b39-a9fc-5ff2527f3b9c 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:52.760 12:49:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:00.907 12:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:00.907 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:00.907 12:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:00.907 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:00.907 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:00.908 Found net devices under 0000:31:00.0: cvl_0_0 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
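
The loop above maps each discovered PCI function to its kernel net device by globbing /sys/bus/pci/devices/$pci/net. A standalone sketch of the same lookup for the E810 ports (0x8086:0x159b) seen in this run; lspci stands in here for the harness's own pci_bus_cache, so treat it as illustrative:

  #!/usr/bin/env bash
  # Resolve every Intel E810 (8086:159b) function to its net device via sysfs,
  # the same path the gather_supported_nvmf_pci_devs loop walks above.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $dev ]] || continue          # function not bound to a netdev
      echo "Found net devices under $pci: ${dev##*/}"
    done
  done
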
00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:00.908 Found net devices under 0000:31:00.1: cvl_0_1 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.908 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.169 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.169 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.169 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:01.169 12:49:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.169 12:49:41 
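
nvmf_tcp_init then splits the two ports: the target NIC moves into a private network namespace while the initiator NIC stays in the root namespace, so one machine can exercise both ends of the fabric. Condensed from the trace (cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply what this run picked):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side, namespaced
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
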
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.169 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.169 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:01.169 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:01.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:15:01.169 00:15:01.169 --- 10.0.0.2 ping statistics --- 00:15:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.169 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:15:01.169 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:15:01.169 00:15:01.169 --- 10.0.0.1 ping statistics --- 00:15:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.169 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:15:01.169 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=570560 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 570560 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 570560 ']' 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.430 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:01.430 [2024-11-25 12:49:41.175203] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:15:01.430 [2024-11-25 12:49:41.175270] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.430 [2024-11-25 12:49:41.266892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.430 [2024-11-25 12:49:41.306479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.430 [2024-11-25 12:49:41.306512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.430 [2024-11-25 12:49:41.306520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.430 [2024-11-25 12:49:41.306527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.430 [2024-11-25 12:49:41.306532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
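
nvmfappstart runs the target inside that namespace and waitforlisten polls the RPC socket until the app answers. The launch-and-wait pattern boils down to the following (the retry loop is a sketch; the harness's waitforlisten carries more bookkeeping):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Poll /var/tmp/spdk.sock until the app services RPCs, then proceed.
  for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
  done
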
00:15:01.430 [2024-11-25 12:49:41.307128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.371 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.371 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:02.371 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:02.371 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.371 12:49:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.371 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.372 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:02.372 [2024-11-25 12:49:42.160784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.372 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:02.372 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:02.372 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:02.632 Malloc1 00:15:02.632 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:02.632 Malloc2 00:15:02.894 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:02.894 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:03.155 12:49:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.155 [2024-11-25 12:49:43.013376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.155 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:03.155 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 58c4ef96-ea5f-4b39-a9fc-5ff2527f3b9c -a 10.0.0.2 -s 4420 -i 4 00:15:03.416 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.416 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:03.416 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.416 12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:03.416 
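
The target is then assembled over JSON-RPC and the first controller connected. Collapsed from the trace, with rpc.py's long path shortened; -a on nvmf_create_subsystem admits any host, and a namespace added without --no-auto-visible attaches to every controller, the default that the masking steps below revoke:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MiB, 512 B blocks
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 58c4ef96-ea5f-4b39-a9fc-5ff2527f3b9c          # host UUID from this run
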
12:49:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:05.327 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:05.327 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:05.327 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.327 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:05.327 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.327 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:05.327 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:05.327 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:05.587 [ 0]:0x1 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c6f634f4af4344fdbd9ff1d14b799da5 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c6f634f4af4344fdbd9ff1d14b799da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:05.587 [ 0]:0x1 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:05.587 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c6f634f4af4344fdbd9ff1d14b799da5 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c6f634f4af4344fdbd9ff1d14b799da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.847 12:49:45 
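
ns_is_visible makes its decision in two steps: the NSID should show up in nvme list-ns, and its NGUID must be non-zero, because a masked namespace identifies with an all-zero NGUID. The helper's core, reconstructed from the trace:

  ns_is_visible() {                      # $1 = nsid, e.g. 0x1
    nvme list-ns /dev/nvme0 | grep "$1"  # prints "[ 0]:0x1" when listed
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
  }
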
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:05.847 [ 1]:0x2 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98d73ca425a64c61b1fee078cc90ab5f 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98d73ca425a64c61b1fee078cc90ab5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:05.847 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:06.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.108 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.108 12:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:06.368 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:06.368 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 58c4ef96-ea5f-4b39-a9fc-5ff2527f3b9c -a 10.0.0.2 -s 4420 -i 4 00:15:06.629 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:06.629 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:06.629 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.629 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:06.629 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:06.629 12:49:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:08.542 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:08.542 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:08.542 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.542 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:08.542 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.542 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:15:08.542 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:08.542 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.804 [ 0]:0x2 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=98d73ca425a64c61b1fee078cc90ab5f 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98d73ca425a64c61b1fee078cc90ab5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.804 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.066 [ 0]:0x1 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c6f634f4af4344fdbd9ff1d14b799da5 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c6f634f4af4344fdbd9ff1d14b799da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:09.066 [ 1]:0x2 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98d73ca425a64c61b1fee078cc90ab5f 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98d73ca425a64c61b1fee078cc90ab5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.066 12:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.327 12:49:49 
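
Because namespace 1 was re-added with --no-auto-visible, nvmf_ns_add_host and nvmf_ns_remove_host now flip its per-host visibility at runtime, and the connected controller sees NSID 1 appear and vanish without reconnecting. The toggle being exercised above, using the same helpers:

  $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ns_is_visible 0x1                      # now passes
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  NOT ns_is_visible 0x1                  # masked again: NGUID reads all zeroes
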
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:09.327 [ 0]:0x2 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.327 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.587 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98d73ca425a64c61b1fee078cc90ab5f 00:15:09.587 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98d73ca425a64c61b1fee078cc90ab5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.587 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:09.587 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.587 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:09.587 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:09.587 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 58c4ef96-ea5f-4b39-a9fc-5ff2527f3b9c -a 10.0.0.2 -s 4420 -i 4 00:15:09.847 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:09.847 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:09.847 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.847 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:09.847 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:09.847 12:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:12.391 [ 0]:0x1 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c6f634f4af4344fdbd9ff1d14b799da5 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c6f634f4af4344fdbd9ff1d14b799da5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:12.391 [ 1]:0x2 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98d73ca425a64c61b1fee078cc90ab5f 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98d73ca425a64c61b1fee078cc90ab5f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.391 12:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:12.391 [ 0]:0x2 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98d73ca425a64c61b1fee078cc90ab5f 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98d73ca425a64c61b1fee078cc90ab5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.391 12:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.391 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.392 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.392 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.392 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.392 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:12.392 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:12.392 [2024-11-25 12:49:52.283855] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:12.392 request: 00:15:12.392 { 00:15:12.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.392 "nsid": 2, 00:15:12.392 "host": "nqn.2016-06.io.spdk:host1", 00:15:12.392 "method": "nvmf_ns_remove_host", 00:15:12.392 "req_id": 1 00:15:12.392 } 00:15:12.392 Got JSON-RPC error response 00:15:12.392 response: 00:15:12.392 { 00:15:12.392 "code": -32602, 00:15:12.392 "message": "Invalid parameters" 00:15:12.392 } 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:12.653 12:49:52 
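
Namespace 2 was added auto-visible, so it carries no per-host allow list and the nvmf_ns_remove_host call against it is rejected with -32602 Invalid parameters, which is exactly the failure the NOT wrapper asserts. Reduced to its essence (a sketch; the real helper also validates its argument and folds signal exit codes):

  NOT() {                                # succeed only if the command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
  }
  NOT $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
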
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:12.653 [ 0]:0x2 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98d73ca425a64c61b1fee078cc90ab5f 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98d73ca425a64c61b1fee078cc90ab5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=573034 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 573034 /var/tmp/host.sock 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 573034 ']' 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:12.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.653 12:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:12.913 [2024-11-25 12:49:52.560192] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:15:12.913 [2024-11-25 12:49:52.560247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573034 ] 00:15:12.913 [2024-11-25 12:49:52.655404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.913 [2024-11-25 12:49:52.691168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.856 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.856 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:13.856 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.856 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.856 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid feb389ce-c7d7-4679-a2c6-f3886d78da58 00:15:13.856 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:13.856 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FEB389CEC7D74679A2C6F3886D78DA58 -i 00:15:14.117 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c6aea9d4-df8a-4247-b663-933916eb26da 00:15:14.117 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:14.117 12:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C6AEA9D4DF8A4247B663933916EB26DA -i 00:15:14.378 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.378 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:14.639 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:14.639 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:14.900 nvme0n1 00:15:14.900 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:14.900 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:15.161 nvme1n2 00:15:15.161 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:15.161 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:15.161 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:15.161 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:15.161 12:49:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:15.423 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:15.423 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:15.423 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:15.423 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:15.423 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ feb389ce-c7d7-4679-a2c6-f3886d78da58 == \f\e\b\3\8\9\c\e\-\c\7\d\7\-\4\6\7\9\-\a\2\c\6\-\f\3\8\8\6\d\7\8\d\a\5\8 ]] 00:15:15.684 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:15.684 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:15.684 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:15.684 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
c6aea9d4-df8a-4247-b663-933916eb26da == \c\6\a\e\a\9\d\4\-\d\f\8\a\-\4\2\4\7\-\b\6\6\3\-\9\3\3\9\1\6\e\b\2\6\d\a ]] 00:15:15.684 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid feb389ce-c7d7-4679-a2c6-f3886d78da58 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEB389CEC7D74679A2C6F3886D78DA58 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEB389CEC7D74679A2C6F3886D78DA58 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.944 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.205 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.205 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.205 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:16.205 12:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g FEB389CEC7D74679A2C6F3886D78DA58 00:15:16.205 [2024-11-25 12:49:55.998149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:16.205 [2024-11-25 12:49:55.998181] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:16.205 [2024-11-25 12:49:55.998191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.205 request: 00:15:16.205 { 00:15:16.205 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.205 "namespace": { 00:15:16.205 "bdev_name": 
"invalid", 00:15:16.205 "nsid": 1, 00:15:16.205 "nguid": "FEB389CEC7D74679A2C6F3886D78DA58", 00:15:16.205 "no_auto_visible": false 00:15:16.205 }, 00:15:16.205 "method": "nvmf_subsystem_add_ns", 00:15:16.205 "req_id": 1 00:15:16.205 } 00:15:16.205 Got JSON-RPC error response 00:15:16.205 response: 00:15:16.205 { 00:15:16.205 "code": -32602, 00:15:16.205 "message": "Invalid parameters" 00:15:16.205 } 00:15:16.205 12:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:16.205 12:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.205 12:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.205 12:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.205 12:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid feb389ce-c7d7-4679-a2c6-f3886d78da58 00:15:16.205 12:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:16.205 12:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FEB389CEC7D74679A2C6F3886D78DA58 -i 00:15:16.466 12:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:18.376 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:18.376 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:18.376 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 573034 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 573034 ']' 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 573034 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 573034 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 573034' 00:15:18.636 killing process with pid 573034 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 573034 00:15:18.636 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 573034 00:15:18.896 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.896 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:18.896 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:18.896 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:18.896 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:19.157 rmmod nvme_tcp 00:15:19.157 rmmod nvme_fabrics 00:15:19.157 rmmod nvme_keyring 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 570560 ']' 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 570560 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 570560 ']' 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 570560 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 570560 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 570560' 00:15:19.157 killing process with pid 570560 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 570560 00:15:19.157 12:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 570560 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:19.419 
12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:19.419 12:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.337 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:21.337 00:15:21.337 real 0m28.840s 00:15:21.337 user 0m31.491s 00:15:21.337 sys 0m8.949s 00:15:21.337 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.337 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.337 ************************************ 00:15:21.337 END TEST nvmf_ns_masking 00:15:21.337 ************************************ 00:15:21.337 12:50:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:21.337 12:50:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.337 12:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.337 12:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.337 12:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.598 ************************************ 00:15:21.598 START TEST nvmf_nvme_cli 00:15:21.598 ************************************ 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:21.598 * Looking for test storage... 
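For reference, the namespace-visibility probe that drove the nvmf_ns_masking run that just ended (traced as target/ns_masking.sh@43-45 above) reduces to roughly the following. This is a sketch reconstructed from the xtrace records, not the exact helper:

    ns_is_visible() {
        local nsid=$1
        # the NSID must appear in the controller's active namespace list at all
        nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # a masked namespace reports an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The test flips visibility with nvmf_ns_add_host and asserts the probe fails or succeeds accordingly, which is the NOT wrapper and es=1 bookkeeping visible in the trace above.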
00:15:21.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:21.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.598 --rc genhtml_branch_coverage=1 00:15:21.598 --rc genhtml_function_coverage=1 00:15:21.598 --rc genhtml_legend=1 00:15:21.598 --rc geninfo_all_blocks=1 00:15:21.598 --rc geninfo_unexecuted_blocks=1 00:15:21.598 00:15:21.598 ' 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:21.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.598 --rc genhtml_branch_coverage=1 00:15:21.598 --rc genhtml_function_coverage=1 00:15:21.598 --rc genhtml_legend=1 00:15:21.598 --rc geninfo_all_blocks=1 00:15:21.598 --rc geninfo_unexecuted_blocks=1 00:15:21.598 00:15:21.598 ' 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:21.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.598 --rc genhtml_branch_coverage=1 00:15:21.598 --rc genhtml_function_coverage=1 00:15:21.598 --rc genhtml_legend=1 00:15:21.598 --rc geninfo_all_blocks=1 00:15:21.598 --rc geninfo_unexecuted_blocks=1 00:15:21.598 00:15:21.598 ' 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:21.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.598 --rc genhtml_branch_coverage=1 00:15:21.598 --rc genhtml_function_coverage=1 00:15:21.598 --rc genhtml_legend=1 00:15:21.598 --rc geninfo_all_blocks=1 00:15:21.598 --rc geninfo_unexecuted_blocks=1 00:15:21.598 00:15:21.598 ' 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
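The lcov version gate traced just above (scripts/common.sh, lt 1.15 2) compares dotted version strings field by field. In outline, and simplified from the trace rather than copied from the script, the comparison behaves like this sketch:

    lt() { # true when dotted version $1 sorts strictly before $2
        local -a ver1 ver2 && local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1 # equal versions are not less-than
    }

Here 1.15 loses to 2 on the first field, so the run selects the lcov 1.x style coverage options seen in the LCOV_OPTS assignments above.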
00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.598 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:21.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.599 12:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:21.599 12:50:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:29.740 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:29.740 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.740 
12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:29.740 Found net devices under 0000:31:00.0: cvl_0_0 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:29.740 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:29.741 Found net devices under 0000:31:00.1: cvl_0_1 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.741 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:30.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:30.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:15:30.003 00:15:30.003 --- 10.0.0.2 ping statistics --- 00:15:30.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.003 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:30.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:15:30.003 00:15:30.003 --- 10.0.0.1 ping statistics --- 00:15:30.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.003 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=578953 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 578953 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 578953 ']' 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.003 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.004 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.004 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.004 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:30.004 12:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.004 [2024-11-25 12:50:09.876927] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
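The interface plumbing for this test run is easier to read in one piece. Collapsed out of the trace above (nvmf/common.sh, the nvmf_tcp_init path) and with error handling omitted, the sequence is:

    # move the target-side port into a private netns; the initiator side stays in the root ns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, then sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping statistics above confirm the wiring between the two E810 ports before the target application is started.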
00:15:30.004 [2024-11-25 12:50:09.876996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.265 [2024-11-25 12:50:09.969469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.265 [2024-11-25 12:50:10.013280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.265 [2024-11-25 12:50:10.013319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.265 [2024-11-25 12:50:10.013327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.265 [2024-11-25 12:50:10.013334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.265 [2024-11-25 12:50:10.013340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.265 [2024-11-25 12:50:10.015054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.265 [2024-11-25 12:50:10.015201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.265 [2024-11-25 12:50:10.015363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.265 [2024-11-25 12:50:10.015363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:30.835 [2024-11-25 12:50:10.726747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.835 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.097 Malloc0 00:15:31.097 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.097 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:31.097 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
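Stripped of the rpc_cmd xtrace noise, the target provisioning that runs across this stretch of the log is the following RPC sequence (rpc_cmd wraps scripts/rpc.py against the target's default RPC socket, /var/tmp/spdk.sock here):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The -s serial, SPDKISFASTANDAWESOME, is what the initiator later greps for in lsblk output to decide that both namespaces have arrived.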
00:15:31.097 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.097 Malloc1 00:15:31.097 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.097 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:31.097 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 [2024-11-25 12:50:10.827651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.098 12:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:31.359 00:15:31.359 Discovery Log Number of Records 2, Generation counter 2 00:15:31.359 =====Discovery Log Entry 0====== 00:15:31.359 trtype: tcp 00:15:31.359 adrfam: ipv4 00:15:31.359 subtype: current discovery subsystem 00:15:31.359 treq: not required 00:15:31.359 portid: 0 00:15:31.359 trsvcid: 4420 00:15:31.359 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:31.359 traddr: 10.0.0.2 00:15:31.359 eflags: explicit discovery connections, duplicate discovery information 00:15:31.359 sectype: none 00:15:31.359 =====Discovery Log Entry 1====== 00:15:31.359 trtype: tcp 00:15:31.359 adrfam: ipv4 00:15:31.359 subtype: nvme subsystem 00:15:31.359 treq: not required 00:15:31.359 portid: 0 00:15:31.359 trsvcid: 4420 00:15:31.359 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:31.359 traddr: 10.0.0.2 00:15:31.359 eflags: none 00:15:31.359 sectype: none 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:31.359 12:50:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:32.743 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:32.743 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:32.743 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:32.743 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:32.743 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:32.743 12:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:34.652 12:50:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.652 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:34.913 /dev/nvme0n2 ]] 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:34.913 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:35.173 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:35.173 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.173 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:35.173 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.173 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:35.173 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:35.173 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.174 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:35.174 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:35.174 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.174 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:35.174 12:50:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.434 12:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.434 rmmod nvme_tcp 00:15:35.434 rmmod nvme_fabrics 00:15:35.434 rmmod nvme_keyring 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 578953 ']' 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 578953 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 578953 ']' 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 578953 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578953 
00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.434 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578953' 00:15:35.694 killing process with pid 578953 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 578953 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 578953 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:35.694 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.695 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:35.695 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.695 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.695 12:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.656 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:37.917 00:15:37.917 real 0m16.311s 00:15:37.917 user 0m24.265s 00:15:37.917 sys 0m6.913s 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:37.917 ************************************ 00:15:37.917 END TEST nvmf_nvme_cli 00:15:37.917 ************************************ 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.917 ************************************ 00:15:37.917 START TEST nvmf_vfio_user 00:15:37.917 ************************************ 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:37.917 * Looking for test storage... 00:15:37.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.917 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:38.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.178 --rc genhtml_branch_coverage=1 00:15:38.178 --rc genhtml_function_coverage=1 00:15:38.178 --rc genhtml_legend=1 00:15:38.178 --rc geninfo_all_blocks=1 00:15:38.178 --rc geninfo_unexecuted_blocks=1 00:15:38.178 00:15:38.178 ' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:38.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.178 --rc genhtml_branch_coverage=1 00:15:38.178 --rc genhtml_function_coverage=1 00:15:38.178 --rc genhtml_legend=1 00:15:38.178 --rc geninfo_all_blocks=1 00:15:38.178 --rc geninfo_unexecuted_blocks=1 00:15:38.178 00:15:38.178 ' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:38.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.178 --rc genhtml_branch_coverage=1 00:15:38.178 --rc genhtml_function_coverage=1 00:15:38.178 --rc genhtml_legend=1 00:15:38.178 --rc geninfo_all_blocks=1 00:15:38.178 --rc geninfo_unexecuted_blocks=1 00:15:38.178 00:15:38.178 ' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:38.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.178 --rc genhtml_branch_coverage=1 00:15:38.178 --rc genhtml_function_coverage=1 00:15:38.178 --rc genhtml_legend=1 00:15:38.178 --rc geninfo_all_blocks=1 00:15:38.178 --rc geninfo_unexecuted_blocks=1 00:15:38.178 00:15:38.178 ' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:38.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:38.178 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
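Note: the lcov gate traced just above (lt 1.15 2 via cmp_versions in scripts/common.sh) splits both version strings on dots, dashes, and colons and compares the fields numerically, left to right. A hedged sketch of that comparison, reconstructed from the xtrace rather than copied from the actual source, and assuming purely numeric fields:

# Sketch of the version gate traced above (scripts/common.sh); details
# are reconstructed from the xtrace, not the verbatim implementation.
cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v d1 d2
    IFS='.-:' read -ra ver1 <<< "$1"             # "1.15" -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}        # pad the shorter version with zeros
        if ((d1 != d2)); then
            if ((d1 < d2)); then [[ $op == *'<'* ]]; else [[ $op == *'>'* ]]; fi
            return                               # propagate the [[ ]] status
        fi
    done
    [[ $op == *'='* ]]                           # equal: only <=, >=, == succeed
}
lt() { cmp_versions "$1" '<' "$2"; }             # lt 1.15 2 -> true (exit 0)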
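Note: the nvme_cli test traced earlier counts namespaces by parsing `nvme list` one line at a time and keeping only entries whose first column is a device node. A minimal sketch of that pattern, reconstructed from the xtrace rather than from the actual nvmf/common.sh:

# Reconstruction of the get_nvme_devs loop seen in the nvme_cli trace:
# read each `nvme list` line, emit the first field when it names a device.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done < <(nvme list)
}

devs=($(get_nvme_devs))                 # e.g. (/dev/nvme0n1 /dev/nvme0n2)
echo "found ${#devs[@]} namespace block devices"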
00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=580633 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 580633' 00:15:38.179 Process pid: 580633 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 580633 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 580633 ']' 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.179 12:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:38.179 [2024-11-25 12:50:17.929552] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:15:38.179 [2024-11-25 12:50:17.929600] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.179 [2024-11-25 12:50:18.009056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.179 [2024-11-25 12:50:18.045008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.179 [2024-11-25 12:50:18.045043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
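Note: the target bring-up traced here reduces to launching nvmf_tgt and polling for its RPC socket before any rpc.py call is issued. A simplified sketch of that flow (the real waitforlisten in autotest_common.sh does considerably more bookkeeping):

# Start the NVMe-oF target on cores 0-3 with all tracepoint groups (-e 0xFFFF),
# then poll until the app answers on its default RPC socket /var/tmp/spdk.sock.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    if "$SPDK/scripts/rpc.py" rpc_get_methods &> /dev/null; then
        break                           # RPC server is up; safe to provision
    fi
    sleep 0.1
done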
00:15:38.179 [2024-11-25 12:50:18.045051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.179 [2024-11-25 12:50:18.045058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.179 [2024-11-25 12:50:18.045064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.179 [2024-11-25 12:50:18.046624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.179 [2024-11-25 12:50:18.046758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.179 [2024-11-25 12:50:18.046914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.179 [2024-11-25 12:50:18.046914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:39.119 12:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:40.058 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:40.058 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:40.058 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:40.058 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:40.058 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:40.058 12:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:40.318 Malloc1 00:15:40.318 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:40.579 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:40.840 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:40.840 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:40.840 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:40.840 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:41.100 Malloc2 00:15:41.100 12:50:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
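Note: condensed, the per-device provisioning traced above is the following rpc.py sequence (shown for device 1; device 2 repeats it with Malloc2, cnode2, SPDK2, and vfio-user2). Names and paths are the ones used by this test:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t VFIOUSER             # once, before any device
mkdir -p /var/run/vfio-user/domain/vfio-user1/1    # directory for the vfio-user socket
$rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0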
00:15:41.361 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:41.361 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:41.622 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:41.622 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:41.622 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:41.622 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:41.622 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:41.622 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:41.622 [2024-11-25 12:50:21.467350] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:15:41.622 [2024-11-25 12:50:21.467394] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581330 ] 00:15:41.622 [2024-11-25 12:50:21.523011] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:41.885 [2024-11-25 12:50:21.531143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:41.885 [2024-11-25 12:50:21.531165] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f976bb1d000 00:15:41.885 [2024-11-25 12:50:21.532145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.885 [2024-11-25 12:50:21.533143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.885 [2024-11-25 12:50:21.534148] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.885 [2024-11-25 12:50:21.535152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.885 [2024-11-25 12:50:21.536160] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.885 [2024-11-25 12:50:21.537163] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.885 [2024-11-25 12:50:21.538171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:41.885 [2024-11-25 12:50:21.539174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.885 [2024-11-25 12:50:21.540183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:41.885 [2024-11-25 12:50:21.540193] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f976bb12000 00:15:41.885 [2024-11-25 12:50:21.541518] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:41.885 [2024-11-25 12:50:21.563014] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:41.885 [2024-11-25 12:50:21.563046] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:41.885 [2024-11-25 12:50:21.565321] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:41.885 [2024-11-25 12:50:21.565365] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:41.885 [2024-11-25 12:50:21.565449] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:41.885 [2024-11-25 12:50:21.565465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:41.885 [2024-11-25 12:50:21.565471] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:41.885 [2024-11-25 12:50:21.566320] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:41.885 [2024-11-25 12:50:21.566330] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:41.886 [2024-11-25 12:50:21.566337] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:41.886 [2024-11-25 12:50:21.567325] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:41.886 [2024-11-25 12:50:21.567334] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:41.886 [2024-11-25 12:50:21.567343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:41.886 [2024-11-25 12:50:21.568331] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:41.886 [2024-11-25 12:50:21.568340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:41.886 [2024-11-25 12:50:21.569333] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
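Note: the spdk_nvme_identify run producing this register-level trace addresses the controller through an SPDK transport ID string instead of a PCI address. Broken out with the values from the command above (flag comments follow the usual SPDK example-app conventions and are not quoted from --help):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
traddr=/var/run/vfio-user/domain/vfio-user1/1    # where the target placed the vfio-user socket
subnqn=nqn.2019-07.io.spdk:cnode1                # subsystem to attach to

"$SPDK/build/bin/spdk_nvme_identify" \
    -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
    -g -L nvme -L nvme_vfio -L vfio_pci          # -L turns on per-component debug logs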
00:15:41.886 [2024-11-25 12:50:21.569342] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:41.886 [2024-11-25 12:50:21.569347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:41.886 [2024-11-25 12:50:21.569354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:41.886 [2024-11-25 12:50:21.569462] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:41.886 [2024-11-25 12:50:21.569466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:41.886 [2024-11-25 12:50:21.569472] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:41.886 [2024-11-25 12:50:21.570345] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:41.886 [2024-11-25 12:50:21.571340] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:41.886 [2024-11-25 12:50:21.572349] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:41.886 [2024-11-25 12:50:21.573346] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.886 [2024-11-25 12:50:21.573414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:41.886 [2024-11-25 12:50:21.574354] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:41.886 [2024-11-25 12:50:21.574362] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:41.886 [2024-11-25 12:50:21.574368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574389] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:41.886 [2024-11-25 12:50:21.574397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574414] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.886 [2024-11-25 12:50:21.574419] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.886 [2024-11-25 12:50:21.574423] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.886 [2024-11-25 12:50:21.574437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:41.886 [2024-11-25 12:50:21.574470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:41.886 [2024-11-25 12:50:21.574479] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:41.886 [2024-11-25 12:50:21.574484] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:41.886 [2024-11-25 12:50:21.574489] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:41.886 [2024-11-25 12:50:21.574496] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:41.886 [2024-11-25 12:50:21.574503] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:41.886 [2024-11-25 12:50:21.574508] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:41.886 [2024-11-25 12:50:21.574513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:41.886 [2024-11-25 12:50:21.574543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:41.886 [2024-11-25 12:50:21.574553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.886 [2024-11-25 12:50:21.574562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.886 [2024-11-25 12:50:21.574570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.886 [2024-11-25 12:50:21.574579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.886 [2024-11-25 12:50:21.574584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:41.886 [2024-11-25 12:50:21.574607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:41.886 [2024-11-25 12:50:21.574615] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:41.886 
[2024-11-25 12:50:21.574620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:41.886 [2024-11-25 12:50:21.574649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:41.886 [2024-11-25 12:50:21.574710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574725] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:41.886 [2024-11-25 12:50:21.574730] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:41.886 [2024-11-25 12:50:21.574735] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.886 [2024-11-25 12:50:21.574741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:41.886 [2024-11-25 12:50:21.574753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:41.886 [2024-11-25 12:50:21.574766] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:41.886 [2024-11-25 12:50:21.574775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574790] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.886 [2024-11-25 12:50:21.574794] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.886 [2024-11-25 12:50:21.574797] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.886 [2024-11-25 12:50:21.574804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.886 [2024-11-25 12:50:21.574818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:41.886 [2024-11-25 12:50:21.574831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574846] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.886 [2024-11-25 12:50:21.574850] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.886 [2024-11-25 12:50:21.574854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.886 [2024-11-25 12:50:21.574860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.886 [2024-11-25 12:50:21.574881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:41.886 [2024-11-25 12:50:21.574889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:41.886 [2024-11-25 12:50:21.574904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:41.887 [2024-11-25 12:50:21.574910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:41.887 [2024-11-25 12:50:21.574916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:41.887 [2024-11-25 12:50:21.574921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:41.887 [2024-11-25 12:50:21.574926] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:41.887 [2024-11-25 12:50:21.574933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:41.887 [2024-11-25 12:50:21.574938] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:41.887 [2024-11-25 12:50:21.574955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:41.887 [2024-11-25 12:50:21.574965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:41.887 [2024-11-25 12:50:21.574978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:41.887 [2024-11-25 12:50:21.574985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:41.887 [2024-11-25 12:50:21.574996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:41.887 [2024-11-25 12:50:21.575006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:41.887 [2024-11-25 12:50:21.575018] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:41.887 [2024-11-25 12:50:21.575030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:41.887 [2024-11-25 12:50:21.575044] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:41.887 [2024-11-25 12:50:21.575049] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:41.887 [2024-11-25 12:50:21.575052] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:41.887 [2024-11-25 12:50:21.575056] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:41.887 [2024-11-25 12:50:21.575059] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:41.887 [2024-11-25 12:50:21.575066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:41.887 [2024-11-25 12:50:21.575073] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:41.887 [2024-11-25 12:50:21.575078] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:41.887 [2024-11-25 12:50:21.575081] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.887 [2024-11-25 12:50:21.575087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:41.887 [2024-11-25 12:50:21.575095] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:41.887 [2024-11-25 12:50:21.575099] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.887 [2024-11-25 12:50:21.575102] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.887 [2024-11-25 12:50:21.575108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.887 [2024-11-25 12:50:21.575116] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:41.887 [2024-11-25 12:50:21.575121] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:41.887 [2024-11-25 12:50:21.575124] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.887 [2024-11-25 12:50:21.575130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:41.887 [2024-11-25 12:50:21.575137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:41.887 [2024-11-25 12:50:21.575151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:41.887 [2024-11-25 12:50:21.575163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:41.887 [2024-11-25 12:50:21.575171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:41.887 ===================================================== 00:15:41.887 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:41.887 ===================================================== 00:15:41.887 Controller Capabilities/Features 00:15:41.887 ================================ 00:15:41.887 Vendor ID: 4e58 00:15:41.887 Subsystem Vendor ID: 4e58 00:15:41.887 Serial Number: SPDK1 00:15:41.887 Model Number: SPDK bdev Controller 00:15:41.887 Firmware Version: 25.01 00:15:41.887 Recommended Arb Burst: 6 00:15:41.887 IEEE OUI Identifier: 8d 6b 50 00:15:41.887 Multi-path I/O 00:15:41.887 May have multiple subsystem ports: Yes 00:15:41.887 May have multiple controllers: Yes 00:15:41.887 Associated with SR-IOV VF: No 00:15:41.887 Max Data Transfer Size: 131072 00:15:41.887 Max Number of Namespaces: 32 00:15:41.887 Max Number of I/O Queues: 127 00:15:41.887 NVMe Specification Version (VS): 1.3 00:15:41.887 NVMe Specification Version (Identify): 1.3 00:15:41.887 Maximum Queue Entries: 256 00:15:41.887 Contiguous Queues Required: Yes 00:15:41.887 Arbitration Mechanisms Supported 00:15:41.887 Weighted Round Robin: Not Supported 00:15:41.887 Vendor Specific: Not Supported 00:15:41.887 Reset Timeout: 15000 ms 00:15:41.887 Doorbell Stride: 4 bytes 00:15:41.887 NVM Subsystem Reset: Not Supported 00:15:41.887 Command Sets Supported 00:15:41.887 NVM Command Set: Supported 00:15:41.887 Boot Partition: Not Supported 00:15:41.887 Memory Page Size Minimum: 4096 bytes 00:15:41.887 Memory Page Size Maximum: 4096 bytes 00:15:41.887 Persistent Memory Region: Not Supported 00:15:41.887 Optional Asynchronous Events Supported 00:15:41.887 Namespace Attribute Notices: Supported 00:15:41.887 Firmware Activation Notices: Not Supported 00:15:41.887 ANA Change Notices: Not Supported 00:15:41.887 PLE Aggregate Log Change Notices: Not Supported 00:15:41.887 LBA Status Info Alert Notices: Not Supported 00:15:41.887 EGE Aggregate Log Change Notices: Not Supported 00:15:41.887 Normal NVM Subsystem Shutdown event: Not Supported 00:15:41.887 Zone Descriptor Change Notices: Not Supported 00:15:41.887 Discovery Log Change Notices: Not Supported 00:15:41.887 Controller Attributes 00:15:41.887 128-bit Host Identifier: Supported 00:15:41.887 Non-Operational Permissive Mode: Not Supported 00:15:41.887 NVM Sets: Not Supported 00:15:41.887 Read Recovery Levels: Not Supported 00:15:41.887 Endurance Groups: Not Supported 00:15:41.887 Predictable Latency Mode: Not Supported 00:15:41.887 Traffic Based Keep ALive: Not Supported 00:15:41.887 Namespace Granularity: Not Supported 00:15:41.887 SQ Associations: Not Supported 00:15:41.887 UUID List: Not Supported 00:15:41.887 Multi-Domain Subsystem: Not Supported 00:15:41.887 Fixed Capacity Management: Not Supported 00:15:41.887 Variable Capacity Management: Not Supported 00:15:41.887 Delete Endurance Group: Not Supported 00:15:41.887 Delete NVM Set: Not Supported 00:15:41.887 Extended LBA Formats Supported: Not Supported 00:15:41.887 Flexible Data Placement Supported: Not Supported 00:15:41.887 00:15:41.887 Controller Memory Buffer Support 00:15:41.887 ================================ 00:15:41.887 
Supported: No 00:15:41.887 00:15:41.887 Persistent Memory Region Support 00:15:41.887 ================================ 00:15:41.887 Supported: No 00:15:41.887 00:15:41.887 Admin Command Set Attributes 00:15:41.887 ============================ 00:15:41.887 Security Send/Receive: Not Supported 00:15:41.887 Format NVM: Not Supported 00:15:41.887 Firmware Activate/Download: Not Supported 00:15:41.887 Namespace Management: Not Supported 00:15:41.887 Device Self-Test: Not Supported 00:15:41.887 Directives: Not Supported 00:15:41.887 NVMe-MI: Not Supported 00:15:41.887 Virtualization Management: Not Supported 00:15:41.887 Doorbell Buffer Config: Not Supported 00:15:41.887 Get LBA Status Capability: Not Supported 00:15:41.887 Command & Feature Lockdown Capability: Not Supported 00:15:41.887 Abort Command Limit: 4 00:15:41.887 Async Event Request Limit: 4 00:15:41.887 Number of Firmware Slots: N/A 00:15:41.887 Firmware Slot 1 Read-Only: N/A 00:15:41.887 Firmware Activation Without Reset: N/A 00:15:41.887 Multiple Update Detection Support: N/A 00:15:41.887 Firmware Update Granularity: No Information Provided 00:15:41.887 Per-Namespace SMART Log: No 00:15:41.887 Asymmetric Namespace Access Log Page: Not Supported 00:15:41.887 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:41.887 Command Effects Log Page: Supported 00:15:41.887 Get Log Page Extended Data: Supported 00:15:41.887 Telemetry Log Pages: Not Supported 00:15:41.887 Persistent Event Log Pages: Not Supported 00:15:41.887 Supported Log Pages Log Page: May Support 00:15:41.887 Commands Supported & Effects Log Page: Not Supported 00:15:41.887 Feature Identifiers & Effects Log Page:May Support 00:15:41.887 NVMe-MI Commands & Effects Log Page: May Support 00:15:41.887 Data Area 4 for Telemetry Log: Not Supported 00:15:41.887 Error Log Page Entries Supported: 128 00:15:41.887 Keep Alive: Supported 00:15:41.887 Keep Alive Granularity: 10000 ms 00:15:41.887 00:15:41.887 NVM Command Set Attributes 00:15:41.887 ========================== 00:15:41.887 Submission Queue Entry Size 00:15:41.888 Max: 64 00:15:41.888 Min: 64 00:15:41.888 Completion Queue Entry Size 00:15:41.888 Max: 16 00:15:41.888 Min: 16 00:15:41.888 Number of Namespaces: 32 00:15:41.888 Compare Command: Supported 00:15:41.888 Write Uncorrectable Command: Not Supported 00:15:41.888 Dataset Management Command: Supported 00:15:41.888 Write Zeroes Command: Supported 00:15:41.888 Set Features Save Field: Not Supported 00:15:41.888 Reservations: Not Supported 00:15:41.888 Timestamp: Not Supported 00:15:41.888 Copy: Supported 00:15:41.888 Volatile Write Cache: Present 00:15:41.888 Atomic Write Unit (Normal): 1 00:15:41.888 Atomic Write Unit (PFail): 1 00:15:41.888 Atomic Compare & Write Unit: 1 00:15:41.888 Fused Compare & Write: Supported 00:15:41.888 Scatter-Gather List 00:15:41.888 SGL Command Set: Supported (Dword aligned) 00:15:41.888 SGL Keyed: Not Supported 00:15:41.888 SGL Bit Bucket Descriptor: Not Supported 00:15:41.888 SGL Metadata Pointer: Not Supported 00:15:41.888 Oversized SGL: Not Supported 00:15:41.888 SGL Metadata Address: Not Supported 00:15:41.888 SGL Offset: Not Supported 00:15:41.888 Transport SGL Data Block: Not Supported 00:15:41.888 Replay Protected Memory Block: Not Supported 00:15:41.888 00:15:41.888 Firmware Slot Information 00:15:41.888 ========================= 00:15:41.888 Active slot: 1 00:15:41.888 Slot 1 Firmware Revision: 25.01 00:15:41.888 00:15:41.888 00:15:41.888 Commands Supported and Effects 00:15:41.888 ============================== 00:15:41.888 Admin 
Commands 00:15:41.888 -------------- 00:15:41.888 Get Log Page (02h): Supported 00:15:41.888 Identify (06h): Supported 00:15:41.888 Abort (08h): Supported 00:15:41.888 Set Features (09h): Supported 00:15:41.888 Get Features (0Ah): Supported 00:15:41.888 Asynchronous Event Request (0Ch): Supported 00:15:41.888 Keep Alive (18h): Supported 00:15:41.888 I/O Commands 00:15:41.888 ------------ 00:15:41.888 Flush (00h): Supported LBA-Change 00:15:41.888 Write (01h): Supported LBA-Change 00:15:41.888 Read (02h): Supported 00:15:41.888 Compare (05h): Supported 00:15:41.888 Write Zeroes (08h): Supported LBA-Change 00:15:41.888 Dataset Management (09h): Supported LBA-Change 00:15:41.888 Copy (19h): Supported LBA-Change 00:15:41.888 00:15:41.888 Error Log 00:15:41.888 ========= 00:15:41.888 00:15:41.888 Arbitration 00:15:41.888 =========== 00:15:41.888 Arbitration Burst: 1 00:15:41.888 00:15:41.888 Power Management 00:15:41.888 ================ 00:15:41.888 Number of Power States: 1 00:15:41.888 Current Power State: Power State #0 00:15:41.888 Power State #0: 00:15:41.888 Max Power: 0.00 W 00:15:41.888 Non-Operational State: Operational 00:15:41.888 Entry Latency: Not Reported 00:15:41.888 Exit Latency: Not Reported 00:15:41.888 Relative Read Throughput: 0 00:15:41.888 Relative Read Latency: 0 00:15:41.888 Relative Write Throughput: 0 00:15:41.888 Relative Write Latency: 0 00:15:41.888 Idle Power: Not Reported 00:15:41.888 Active Power: Not Reported 00:15:41.888 Non-Operational Permissive Mode: Not Supported 00:15:41.888 00:15:41.888 Health Information 00:15:41.888 ================== 00:15:41.888 Critical Warnings: 00:15:41.888 Available Spare Space: OK 00:15:41.888 Temperature: OK 00:15:41.888 Device Reliability: OK 00:15:41.888 Read Only: No 00:15:41.888 Volatile Memory Backup: OK 00:15:41.888 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:41.888 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:41.888 Available Spare: 0% 00:15:41.888 Available Spare Threshold: 0% 00:15:41.888 Life Percentage Used: 0% 00:15:41.888 Data Units Read: 0 00:15:41.888 Data Units Written: 0 00:15:41.888 Host Read Commands: 0 00:15:41.888 Host Write Commands: 0 00:15:41.888 Controller Busy Time: 0 minutes 00:15:41.888 Power Cycles: 0 00:15:41.888 Power On Hours: 0 hours 00:15:41.888 Unsafe Shutdowns: 0 00:15:41.888 Unrecoverable Media Errors: 0 00:15:41.888 Lifetime Error Log Entries: 0 00:15:41.888 Warning Temperature Time: 0 minutes 00:15:41.888 Critical Temperature Time: 0 minutes 00:15:41.888 00:15:41.888 Number of Queues 00:15:41.888 ================ 00:15:41.888 Number of I/O Submission Queues: 127 00:15:41.888 Number of I/O Completion Queues: 127 00:15:41.888 00:15:41.888 Active Namespaces 00:15:41.888 ================= 00:15:41.888 Namespace ID:1 00:15:41.888 Error Recovery Timeout: Unlimited 00:15:41.888 Command Set Identifier: NVM (00h) 00:15:41.888 Deallocate: Supported 00:15:41.888 Deallocated/Unwritten Error: Not Supported 00:15:41.888 Deallocated Read Value: Unknown 00:15:41.888 Deallocate in Write Zeroes: Not Supported 00:15:41.888 Deallocated Guard Field: 0xFFFF 00:15:41.888 Flush: Supported 00:15:41.888 Reservation: Supported 00:15:41.888 Namespace Sharing Capabilities: Multiple Controllers 00:15:41.888 Size (in LBAs): 131072 (0GiB) 00:15:41.888 Capacity (in LBAs): 131072 (0GiB) 00:15:41.888 Utilization (in LBAs): 131072 (0GiB) 00:15:41.888 NGUID: 54D6027E1F724C2188E1E69266C435AB 00:15:41.888 UUID: 54d6027e-1f72-4c21-88e1-e69266c435ab 00:15:41.888 Thin Provisioning: Not Supported 00:15:41.888 Per-NS Atomic Units: Yes 00:15:41.888 Atomic Boundary Size (Normal): 0 00:15:41.888 Atomic Boundary Size (PFail): 0 00:15:41.888 Atomic Boundary Offset: 0 00:15:41.888 Maximum Single Source Range Length: 65535 00:15:41.888 Maximum Copy Length: 65535 00:15:41.888 Maximum Source Range Count: 1 00:15:41.888 NGUID/EUI64 Never Reused: No 00:15:41.888 Namespace Write Protected: No 00:15:41.888 Number of LBA Formats: 1 00:15:41.888 Current LBA Format: LBA Format #00 00:15:41.888 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:41.888 00:15:41.888
[2024-11-25 12:50:21.575270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:41.888 [2024-11-25 12:50:21.575279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:41.888 [2024-11-25 12:50:21.575306] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:41.888 [2024-11-25 12:50:21.575316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.888 [2024-11-25 12:50:21.575322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.888 [2024-11-25 12:50:21.575329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.888 [2024-11-25 12:50:21.575335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.888 [2024-11-25 12:50:21.575368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:41.888 [2024-11-25 12:50:21.575378] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:41.888 [2024-11-25 12:50:21.576370] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.888 [2024-11-25 12:50:21.576411] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:41.888 [2024-11-25 12:50:21.576417] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:41.888 [2024-11-25 12:50:21.577378] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:41.888 [2024-11-25 12:50:21.577388] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:41.888 [2024-11-25 12:50:21.577449] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:41.888 [2024-11-25 12:50:21.582869] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:41.888 12:50:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
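For reference, a gloss of the spdk_nvme_perf flags used in this and the following runs, as we read them from the tool's usage text; the -s and -g semantics are our assumption, inferred from the DPDK EAL parameters echoed later in this log:

    # Flags for the spdk_nvme_perf invocation above (a sketch, not authoritative):
    #   -r '<trtype:... traddr:... subnqn:...>'  transport ID selecting the vfio-user controller
    #   -q 128      queue depth (outstanding I/Os per queue)
    #   -o 4096     I/O size in bytes
    #   -w read     workload type (write and randrw appear in the later runs)
    #   -t 5        run time in seconds
    #   -c 0x2      core mask, i.e. one worker thread on core 1
    #   -s 256      hugepage memory for the app, in MB (assumption)
    #   -g          single-file DPDK memory segments, which vfio-user runs need so the target
    #               can mmap the host's I/O buffers (assumption; matches the
    #               --single-file-segments EAL parameter echoed later in this log)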
00:15:41.888 [2024-11-25 12:50:21.776535] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.181 Initializing NVMe Controllers 00:15:47.181 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:47.181 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:47.181 Initialization complete. Launching workers. 00:15:47.181 ======================================================== 00:15:47.181 Latency(us) 00:15:47.181 Device Information : IOPS MiB/s Average min max 00:15:47.181 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40006.65 156.28 3199.34 847.02 6936.17 00:15:47.181 ======================================================== 00:15:47.181 Total : 40006.65 156.28 3199.34 847.02 6936.17 00:15:47.181 00:15:47.181 [2024-11-25 12:50:26.795159] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.181 12:50:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:47.181 [2024-11-25 12:50:26.988033] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.465 Initializing NVMe Controllers 00:15:52.465 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:52.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:52.465 Initialization complete. Launching workers. 
00:15:52.465 ======================================================== 00:15:52.465 Latency(us) 00:15:52.465 Device Information : IOPS MiB/s Average min max 00:15:52.465 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7993.50 3990.20 15960.96 00:15:52.465 ======================================================== 00:15:52.465 Total : 16025.60 62.60 7993.50 3990.20 15960.96 00:15:52.465 00:15:52.465 [2024-11-25 12:50:32.024098] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.465 12:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:52.465 [2024-11-25 12:50:32.240985] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:57.760 [2024-11-25 12:50:37.306023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:57.760 Initializing NVMe Controllers 00:15:57.760 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:57.760 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:57.760 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:57.760 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:57.760 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:57.760 Initialization complete. Launching workers. 00:15:57.760 Starting thread on core 2 00:15:57.760 Starting thread on core 3 00:15:57.760 Starting thread on core 1 00:15:57.760 12:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:57.760 [2024-11-25 12:50:37.605220] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.970 [2024-11-25 12:50:41.151021] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.970 Initializing NVMe Controllers 00:16:01.970 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.970 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.970 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:01.970 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:01.970 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:01.970 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:01.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:01.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:01.970 Initialization complete. Launching workers. 
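The arbitration summary printed below lists, per core, the measured IOPS and the projected seconds to complete the configured 100000 I/Os (-n 100000 in the echoed configuration); a quick sanity check of the core 0 row, assuming that reading of the two columns:

    # core 0 reports "6196.00 IO/s   16.14 secs/100000 ios"
    echo "scale=4; 100000 / 6196.00" | bc   # -> 16.1394, i.e. the 16.14 in the summary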
00:16:01.970 Starting thread on core 1 with urgent priority queue 00:16:01.970 Starting thread on core 2 with urgent priority queue 00:16:01.970 Starting thread on core 3 with urgent priority queue 00:16:01.970 Starting thread on core 0 with urgent priority queue 00:16:01.970 SPDK bdev Controller (SPDK1 ) core 0: 6196.00 IO/s 16.14 secs/100000 ios 00:16:01.970 SPDK bdev Controller (SPDK1 ) core 1: 9583.33 IO/s 10.43 secs/100000 ios 00:16:01.970 SPDK bdev Controller (SPDK1 ) core 2: 6989.00 IO/s 14.31 secs/100000 ios 00:16:01.970 SPDK bdev Controller (SPDK1 ) core 3: 8730.67 IO/s 11.45 secs/100000 ios 00:16:01.970 ======================================================== 00:16:01.970 00:16:01.970 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:01.970 [2024-11-25 12:50:41.473391] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.970 Initializing NVMe Controllers 00:16:01.970 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.970 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.970 Namespace ID: 1 size: 0GB 00:16:01.970 Initialization complete. 00:16:01.970 INFO: using host memory buffer for IO 00:16:01.970 Hello world! 00:16:01.970 [2024-11-25 12:50:41.506606] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.970 12:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:01.970 [2024-11-25 12:50:41.801244] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.359 Initializing NVMe Controllers 00:16:03.359 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.359 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.359 Initialization complete. Launching workers. 
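In the overhead report that follows, the submit/complete summary line is in nanoseconds while the histogram buckets ("Range in us") are in microseconds; each row gives the bucket range, the cumulative percentile, and the count of I/Os landing in that bucket. A unit check against the submit figures, assuming that reading:

    # submit (in ns) avg, min, max = 8153.0, 3946.7, 4996405.8
    echo "scale=3; 3946.7 / 1000" | bc      # -> 3.946 us, the first 3.947 us bucket
    echo "scale=3; 4996405.8 / 1000" | bc   # -> 4996.405 us, the 4969.813-4997.120 us tail bucket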
00:16:03.359 submit (in ns) avg, min, max = 8153.0, 3946.7, 4996405.8 00:16:03.359 complete (in ns) avg, min, max = 16962.0, 2371.7, 4045965.0 00:16:03.359 00:16:03.359 Submit histogram 00:16:03.359 ================ 00:16:03.359 Range in us Cumulative Count 00:16:03.359 3.947 - 3.973: 1.5076% ( 286) 00:16:03.359 3.973 - 4.000: 6.8631% ( 1016) 00:16:03.359 4.000 - 4.027: 17.1314% ( 1948) 00:16:03.359 4.027 - 4.053: 30.8155% ( 2596) 00:16:03.359 4.053 - 4.080: 43.1501% ( 2340) 00:16:03.359 4.080 - 4.107: 58.5894% ( 2929) 00:16:03.359 4.107 - 4.133: 75.4731% ( 3203) 00:16:03.359 4.133 - 4.160: 87.9869% ( 2374) 00:16:03.359 4.160 - 4.187: 94.3282% ( 1203) 00:16:03.359 4.187 - 4.213: 97.6859% ( 637) 00:16:03.359 4.213 - 4.240: 99.0143% ( 252) 00:16:03.359 4.240 - 4.267: 99.3464% ( 63) 00:16:03.359 4.267 - 4.293: 99.4571% ( 21) 00:16:03.359 4.293 - 4.320: 99.4834% ( 5) 00:16:03.359 4.453 - 4.480: 99.4887% ( 1) 00:16:03.359 4.533 - 4.560: 99.4940% ( 1) 00:16:03.359 4.560 - 4.587: 99.4992% ( 1) 00:16:03.359 4.640 - 4.667: 99.5045% ( 1) 00:16:03.359 4.747 - 4.773: 99.5098% ( 1) 00:16:03.359 5.147 - 5.173: 99.5150% ( 1) 00:16:03.359 5.387 - 5.413: 99.5203% ( 1) 00:16:03.359 5.707 - 5.733: 99.5256% ( 1) 00:16:03.359 5.760 - 5.787: 99.5309% ( 1) 00:16:03.359 5.813 - 5.840: 99.5361% ( 1) 00:16:03.359 5.893 - 5.920: 99.5467% ( 2) 00:16:03.359 6.027 - 6.053: 99.5519% ( 1) 00:16:03.359 6.133 - 6.160: 99.5572% ( 1) 00:16:03.359 6.160 - 6.187: 99.5625% ( 1) 00:16:03.359 6.187 - 6.213: 99.5678% ( 1) 00:16:03.359 6.213 - 6.240: 99.5836% ( 3) 00:16:03.359 6.240 - 6.267: 99.5994% ( 3) 00:16:03.359 6.267 - 6.293: 99.6099% ( 2) 00:16:03.359 6.293 - 6.320: 99.6205% ( 2) 00:16:03.359 6.320 - 6.347: 99.6257% ( 1) 00:16:03.359 6.347 - 6.373: 99.6363% ( 2) 00:16:03.359 6.453 - 6.480: 99.6416% ( 1) 00:16:03.359 6.480 - 6.507: 99.6521% ( 2) 00:16:03.359 6.507 - 6.533: 99.6626% ( 2) 00:16:03.359 6.560 - 6.587: 99.6837% ( 4) 00:16:03.359 6.587 - 6.613: 99.6943% ( 2) 00:16:03.359 6.613 - 6.640: 99.6995% ( 1) 00:16:03.359 6.640 - 6.667: 99.7101% ( 2) 00:16:03.359 6.693 - 6.720: 99.7154% ( 1) 00:16:03.359 6.720 - 6.747: 99.7259% ( 2) 00:16:03.359 6.773 - 6.800: 99.7312% ( 1) 00:16:03.359 6.800 - 6.827: 99.7470% ( 3) 00:16:03.359 6.827 - 6.880: 99.7523% ( 1) 00:16:03.359 6.880 - 6.933: 99.7681% ( 3) 00:16:03.359 6.987 - 7.040: 99.7733% ( 1) 00:16:03.359 7.040 - 7.093: 99.7839% ( 2) 00:16:03.359 7.093 - 7.147: 99.7944% ( 2) 00:16:03.359 7.147 - 7.200: 99.8050% ( 2) 00:16:03.359 7.200 - 7.253: 99.8261% ( 4) 00:16:03.359 7.253 - 7.307: 99.8313% ( 1) 00:16:03.359 7.360 - 7.413: 99.8366% ( 1) 00:16:03.359 7.413 - 7.467: 99.8471% ( 2) 00:16:03.359 7.467 - 7.520: 99.8524% ( 1) 00:16:03.359 7.520 - 7.573: 99.8629% ( 2) 00:16:03.359 7.573 - 7.627: 99.8735% ( 2) 00:16:03.359 7.627 - 7.680: 99.8788% ( 1) 00:16:03.359 7.733 - 7.787: 99.8840% ( 1) 00:16:03.359 7.787 - 7.840: 99.8893% ( 1) 00:16:03.359 8.160 - 8.213: 99.8946% ( 1) 00:16:03.359 17.173 - 17.280: 99.8998% ( 1) 00:16:03.359 3986.773 - 4014.080: 99.9947% ( 18) 00:16:03.359 4969.813 - 4997.120: 100.0000% ( 1) 00:16:03.359 00:16:03.359 Complete histogram 00:16:03.359 ================== 00:16:03.359 Range in us Cumulative Count 00:16:03.359 2.360 - 2.373: 0.0053% ( 1) 00:16:03.359 2.387 - 2.400: 0.8170% ( 154) 00:16:03.359 2.400 - 2.413: 1.0015% ( 35) 00:16:03.359 2.413 - 2.427: 1.1807% ( 34) 00:16:03.359 2.427 - 2.440: 2.9202% ( 330) 00:16:03.359 2.440 - 2.453: 47.3196% ( 8423) 00:16:03.359 2.453 - 2.467: 59.2747% ( 2268) 00:16:03.359 2.467 - 2.480: 71.9941% ( 2413) 00:16:03.359 
2.480 - 2.493: 78.7518% ( 1282) 00:16:03.359 2.493 - 2.507: 81.2134% ( 467) 00:16:03.359 2.507 - 2.520: 85.7730% ( 865) 00:16:03.359 2.520 - 2.533: 91.8613% ( 1155) 00:16:03.359 2.533 - 2.547: 95.7778% ( 743) 00:16:03.359 2.547 - 2.560: 97.7281% ( 370) 00:16:03.359 2.560 - 2.573: 98.7982% ( 203) 00:16:03.359 2.573 - 2.587: 99.3042% ( 96) 00:16:03.359 2.587 - 2.600: 99.4571% ( 29) 00:16:03.359 2.600 - 2.613: 99.4729% ( 3) 00:16:03.359 2.627 - 2.640: 99.4782% ( 1) 00:16:03.359 4.480 - 4.507: 99.4834% ( 1) 00:16:03.359 4.533 - 4.560: 99.4887% ( 1) 00:16:03.359 4.640 - 4.667: 99.4940% ( 1) 00:16:03.359 4.693 - 4.720: 99.4992% ( 1) 00:16:03.359 4.800 - 4.827: 99.5045% ( 1) 00:16:03.359 4.853 - 4.880: 99.5150% ( 2) 00:16:03.359 4.960 - 4.987: 99.5203% ( 1) 00:16:03.359 4.987 - 5.013: 99.5256% ( 1) 00:16:03.359 5.067 - 5.093: 99.5309% ( 1) 00:16:03.359 5.093 - 5.120: 99.5414% ( 2) 00:16:03.359 5.147 - 5.173: 99.5467% ( 1) 00:16:03.359 5.280 - 5.307: 99.5519% ( 1) 00:16:03.359 5.307 - 5.333: 99.5572% ( 1) 00:16:03.359 5.333 - 5.360: 99.5625% ( 1) 00:16:03.359 5.360 - 5.387: 99.5678% ( 1) 00:16:03.359 5.387 - 5.413: 99.5730% ( 1) 00:16:03.359 5.440 - 5.467: 99.5783% ( 1) 00:16:03.359 5.520 - 5.547: 99.5836% ( 1) 00:16:03.359 5.920 - 5.947: 99.5888% ( 1) 00:16:03.359 6.080 - 6.107: 99.5941% ( 1) 00:16:03.359 6.107 - 6.133: 99.5994% ( 1) 00:16:03.359 6.320 - 6.347: 99.6047% ( 1) 00:16:03.359 7.520 - 7.573: 99.6099% ( 1) 00:16:03.359 7.840 - 7.893: 99.6152% ( 1) 00:16:03.359 11.947 - 12.000: 99.6205% ( 1) 00:16:03.359 12.907 - 12.960: 99.6257% ( 1) 00:16:03.359 44.800 - 45.013: 99.6310% ( 1) 00:16:03.359 84.907 - 85.333: 99.6363% ( 1) 00:16:03.359 2976.427 - 2990.080: 99.6416% ( 1) 00:16:03.359 3986.773 - 4014.080: 99.9895% ( 66) 00:16:03.359 4014.080 - 4041.387: 99.9947% ( 1) 00:16:03.359 4041.387 - 4068.693: 100.0000% ( 1) 00:16:03.359 [2024-11-25 12:50:42.824681] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.359 00:16:03.359 12:50:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:03.359 12:50:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:03.359 12:50:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:03.359 12:50:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:03.359 12:50:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:03.359 [ 00:16:03.359 { 00:16:03.359 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:03.359 "subtype": "Discovery", 00:16:03.359 "listen_addresses": [], 00:16:03.359 "allow_any_host": true, 00:16:03.359 "hosts": [] 00:16:03.359 }, 00:16:03.359 { 00:16:03.359 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:03.359 "subtype": "NVMe", 00:16:03.359 "listen_addresses": [ 00:16:03.359 { 00:16:03.359 "trtype": "VFIOUSER", 00:16:03.359 "adrfam": "IPv4", 00:16:03.359 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:03.359 "trsvcid": "0" 00:16:03.359 } 00:16:03.359 ], 00:16:03.359 "allow_any_host": true, 00:16:03.359 "hosts": [], 00:16:03.359 "serial_number": "SPDK1", 00:16:03.359 "model_number": "SPDK bdev Controller", 00:16:03.359 "max_namespaces":
32, 00:16:03.359 "min_cntlid": 1, 00:16:03.359 "max_cntlid": 65519, 00:16:03.359 "namespaces": [ 00:16:03.359 { 00:16:03.359 "nsid": 1, 00:16:03.359 "bdev_name": "Malloc1", 00:16:03.359 "name": "Malloc1", 00:16:03.359 "nguid": "54D6027E1F724C2188E1E69266C435AB", 00:16:03.359 "uuid": "54d6027e-1f72-4c21-88e1-e69266c435ab" 00:16:03.359 } 00:16:03.359 ] 00:16:03.359 }, 00:16:03.359 { 00:16:03.359 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:03.359 "subtype": "NVMe", 00:16:03.359 "listen_addresses": [ 00:16:03.359 { 00:16:03.359 "trtype": "VFIOUSER", 00:16:03.359 "adrfam": "IPv4", 00:16:03.359 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:03.359 "trsvcid": "0" 00:16:03.359 } 00:16:03.359 ], 00:16:03.359 "allow_any_host": true, 00:16:03.360 "hosts": [], 00:16:03.360 "serial_number": "SPDK2", 00:16:03.360 "model_number": "SPDK bdev Controller", 00:16:03.360 "max_namespaces": 32, 00:16:03.360 "min_cntlid": 1, 00:16:03.360 "max_cntlid": 65519, 00:16:03.360 "namespaces": [ 00:16:03.360 { 00:16:03.360 "nsid": 1, 00:16:03.360 "bdev_name": "Malloc2", 00:16:03.360 "name": "Malloc2", 00:16:03.360 "nguid": "7A8E3DFECAE54A50B1FA8DFE3149358F", 00:16:03.360 "uuid": "7a8e3dfe-cae5-4a50-b1fa-8dfe3149358f" 00:16:03.360 } 00:16:03.360 ] 00:16:03.360 } 00:16:03.360 ] 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=585669 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:03.360 Malloc3 00:16:03.360 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:03.621 [2024-11-25 12:50:43.267272] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.621 [2024-11-25 12:50:43.414217] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.621 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:03.621 Asynchronous Event Request test 00:16:03.621 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.621 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.622 Registering asynchronous event callbacks... 00:16:03.622 Starting namespace attribute notice tests for all controllers... 00:16:03.622 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:03.622 aer_cb - Changed Namespace 00:16:03.622 Cleaning up... 00:16:03.884 [ 00:16:03.884 { 00:16:03.884 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:03.884 "subtype": "Discovery", 00:16:03.884 "listen_addresses": [], 00:16:03.884 "allow_any_host": true, 00:16:03.884 "hosts": [] 00:16:03.884 }, 00:16:03.884 { 00:16:03.884 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:03.884 "subtype": "NVMe", 00:16:03.884 "listen_addresses": [ 00:16:03.884 { 00:16:03.884 "trtype": "VFIOUSER", 00:16:03.884 "adrfam": "IPv4", 00:16:03.884 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:03.884 "trsvcid": "0" 00:16:03.884 } 00:16:03.884 ], 00:16:03.884 "allow_any_host": true, 00:16:03.884 "hosts": [], 00:16:03.884 "serial_number": "SPDK1", 00:16:03.884 "model_number": "SPDK bdev Controller", 00:16:03.884 "max_namespaces": 32, 00:16:03.884 "min_cntlid": 1, 00:16:03.884 "max_cntlid": 65519, 00:16:03.884 "namespaces": [ 00:16:03.884 { 00:16:03.884 "nsid": 1, 00:16:03.884 "bdev_name": "Malloc1", 00:16:03.884 "name": "Malloc1", 00:16:03.884 "nguid": "54D6027E1F724C2188E1E69266C435AB", 00:16:03.884 "uuid": "54d6027e-1f72-4c21-88e1-e69266c435ab" 00:16:03.884 }, 00:16:03.884 { 00:16:03.884 "nsid": 2, 00:16:03.884 "bdev_name": "Malloc3", 00:16:03.884 "name": "Malloc3", 00:16:03.884 "nguid": "18F5C08A01C64B63BFD1BE1527758424", 00:16:03.884 "uuid": "18f5c08a-01c6-4b63-bfd1-be1527758424" 00:16:03.884 } 00:16:03.884 ] 00:16:03.884 }, 00:16:03.884 { 00:16:03.884 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:03.884 "subtype": "NVMe", 00:16:03.884 "listen_addresses": [ 00:16:03.884 { 00:16:03.884 "trtype": "VFIOUSER", 00:16:03.884 "adrfam": "IPv4", 00:16:03.884 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:03.884 "trsvcid": "0" 00:16:03.884 } 00:16:03.884 ], 00:16:03.884 "allow_any_host": true, 00:16:03.884 "hosts": [], 00:16:03.884 "serial_number": "SPDK2", 00:16:03.884 "model_number": "SPDK bdev 
Controller", 00:16:03.884 "max_namespaces": 32, 00:16:03.884 "min_cntlid": 1, 00:16:03.884 "max_cntlid": 65519, 00:16:03.884 "namespaces": [ 00:16:03.884 { 00:16:03.884 "nsid": 1, 00:16:03.884 "bdev_name": "Malloc2", 00:16:03.884 "name": "Malloc2", 00:16:03.884 "nguid": "7A8E3DFECAE54A50B1FA8DFE3149358F", 00:16:03.884 "uuid": "7a8e3dfe-cae5-4a50-b1fa-8dfe3149358f" 00:16:03.884 } 00:16:03.884 ] 00:16:03.884 } 00:16:03.884 ] 00:16:03.884 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 585669 00:16:03.884 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:03.884 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:03.884 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:03.884 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:03.885 [2024-11-25 12:50:43.634147] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:16:03.885 [2024-11-25 12:50:43.634189] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585691 ] 00:16:03.885 [2024-11-25 12:50:43.687954] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:03.885 [2024-11-25 12:50:43.696107] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:03.885 [2024-11-25 12:50:43.696131] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9bc88f6000 00:16:03.885 [2024-11-25 12:50:43.697104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:03.885 [2024-11-25 12:50:43.698108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:03.885 [2024-11-25 12:50:43.699110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:03.885 [2024-11-25 12:50:43.700117] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:03.885 [2024-11-25 12:50:43.701119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:03.885 [2024-11-25 12:50:43.702124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:03.885 [2024-11-25 12:50:43.703131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:03.885 [2024-11-25 12:50:43.704139] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:16:03.885 [2024-11-25 12:50:43.705156] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:03.885 [2024-11-25 12:50:43.705165] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9bc88eb000 00:16:03.885 [2024-11-25 12:50:43.706490] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:03.885 [2024-11-25 12:50:43.728020] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:03.885 [2024-11-25 12:50:43.728045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:03.885 [2024-11-25 12:50:43.730103] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:03.885 [2024-11-25 12:50:43.730147] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:03.885 [2024-11-25 12:50:43.730230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:03.885 [2024-11-25 12:50:43.730244] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:03.885 [2024-11-25 12:50:43.730249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:03.885 [2024-11-25 12:50:43.731107] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:03.885 [2024-11-25 12:50:43.731116] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:03.885 [2024-11-25 12:50:43.731124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:03.885 [2024-11-25 12:50:43.732111] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:03.885 [2024-11-25 12:50:43.732120] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:03.885 [2024-11-25 12:50:43.732127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:03.885 [2024-11-25 12:50:43.733119] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:03.885 [2024-11-25 12:50:43.733128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:03.885 [2024-11-25 12:50:43.734123] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:03.885 [2024-11-25 12:50:43.734131] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:16:03.885 [2024-11-25 12:50:43.734137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:03.885 [2024-11-25 12:50:43.734143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:03.885 [2024-11-25 12:50:43.734251] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:03.885 [2024-11-25 12:50:43.734256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:03.885 [2024-11-25 12:50:43.734261] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:03.885 [2024-11-25 12:50:43.735130] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:03.885 [2024-11-25 12:50:43.736143] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:03.885 [2024-11-25 12:50:43.737155] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:03.885 [2024-11-25 12:50:43.738155] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:03.885 [2024-11-25 12:50:43.738205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:03.885 [2024-11-25 12:50:43.739163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:03.885 [2024-11-25 12:50:43.739174] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:03.885 [2024-11-25 12:50:43.739179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:03.885 [2024-11-25 12:50:43.739201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:03.885 [2024-11-25 12:50:43.739212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:03.885 [2024-11-25 12:50:43.739224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:03.885 [2024-11-25 12:50:43.739229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:03.885 [2024-11-25 12:50:43.739233] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:03.885 [2024-11-25 12:50:43.739245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:03.885 [2024-11-25 12:50:43.745872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:03.885 
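The controller-register traffic in this bring-up trace decodes against the NVMe register layout (offset 0x14 is CC, offset 0x1c is CSTS); a sketch of the bit fields behind the values above:

    # CC = 0x460001, the value written to enable the controller:
    echo $(( (0x460001 >> 20) & 0xf ))   # IOCQES = 4 -> 2^4 = 16-byte completion queue entries
    echo $(( (0x460001 >> 16) & 0xf ))   # IOSQES = 6 -> 2^6 = 64-byte submission queue entries
    echo $((  0x460001        & 0x1 ))   # EN = 1 -> enable
    # CSTS reads: 0x0 = not ready, 0x1 = RDY set; the 0x9 seen at each controller shutdown in
    # this log adds SHST = 10b (shutdown complete), and the CC value 0x464001 written there is
    # CC with SHN = 01b (normal shutdown notification) set.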
[2024-11-25 12:50:43.745884] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:03.885 [2024-11-25 12:50:43.745889] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:03.885 [2024-11-25 12:50:43.745894] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:03.885 [2024-11-25 12:50:43.745898] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:03.885 [2024-11-25 12:50:43.745906] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:03.885 [2024-11-25 12:50:43.745911] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:03.885 [2024-11-25 12:50:43.745916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:03.885 [2024-11-25 12:50:43.745925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:03.885 [2024-11-25 12:50:43.745935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:03.885 [2024-11-25 12:50:43.753867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:03.885 [2024-11-25 12:50:43.753879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.885 [2024-11-25 12:50:43.753888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.885 [2024-11-25 12:50:43.753896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.885 [2024-11-25 12:50:43.753905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.885 [2024-11-25 12:50:43.753909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:03.885 [2024-11-25 12:50:43.753916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:03.885 [2024-11-25 12:50:43.753925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:03.885 [2024-11-25 12:50:43.761868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:03.885 [2024-11-25 12:50:43.761878] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:03.885 [2024-11-25 12:50:43.761883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:16:03.885 [2024-11-25 12:50:43.761890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:03.885 [2024-11-25 12:50:43.761896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:03.885 [2024-11-25 12:50:43.761905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:03.885 [2024-11-25 12:50:43.769867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:03.885 [2024-11-25 12:50:43.769930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:03.886 [2024-11-25 12:50:43.769939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:03.886 [2024-11-25 12:50:43.769947] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:03.886 [2024-11-25 12:50:43.769951] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:03.886 [2024-11-25 12:50:43.769955] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:03.886 [2024-11-25 12:50:43.769961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:03.886 [2024-11-25 12:50:43.777870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:03.886 [2024-11-25 12:50:43.777881] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:03.886 [2024-11-25 12:50:43.777890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:03.886 [2024-11-25 12:50:43.777898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:03.886 [2024-11-25 12:50:43.777905] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:03.886 [2024-11-25 12:50:43.777909] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:03.886 [2024-11-25 12:50:43.777913] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:03.886 [2024-11-25 12:50:43.777919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:03.886 [2024-11-25 12:50:43.785868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:03.886 [2024-11-25 12:50:43.785885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:03.886 [2024-11-25 12:50:43.785893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:16:03.886 [2024-11-25 12:50:43.785901] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:03.886 [2024-11-25 12:50:43.785908] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:03.886 [2024-11-25 12:50:43.785911] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:03.886 [2024-11-25 12:50:43.785917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.149 [2024-11-25 12:50:43.793866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:04.149 [2024-11-25 12:50:43.793877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:04.149 [2024-11-25 12:50:43.793884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:04.149 [2024-11-25 12:50:43.793892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:04.149 [2024-11-25 12:50:43.793897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:04.149 [2024-11-25 12:50:43.793903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:04.149 [2024-11-25 12:50:43.793908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:04.149 [2024-11-25 12:50:43.793913] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:04.149 [2024-11-25 12:50:43.793918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:04.149 [2024-11-25 12:50:43.793923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:04.149 [2024-11-25 12:50:43.793939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:04.149 [2024-11-25 12:50:43.801868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:04.149 [2024-11-25 12:50:43.801882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:04.149 [2024-11-25 12:50:43.809868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:04.149 [2024-11-25 12:50:43.809881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:04.149 [2024-11-25 12:50:43.817869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:16:04.149 [2024-11-25 12:50:43.817882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:04.149 [2024-11-25 12:50:43.825866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:04.149 [2024-11-25 12:50:43.825882] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:04.149 [2024-11-25 12:50:43.825887] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:04.149 [2024-11-25 12:50:43.825891] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:04.149 [2024-11-25 12:50:43.825894] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:04.149 [2024-11-25 12:50:43.825898] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:04.149 [2024-11-25 12:50:43.825904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:04.149 [2024-11-25 12:50:43.825914] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:04.149 [2024-11-25 12:50:43.825918] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:04.149 [2024-11-25 12:50:43.825922] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.149 [2024-11-25 12:50:43.825928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:04.149 [2024-11-25 12:50:43.825935] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:04.149 [2024-11-25 12:50:43.825939] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.149 [2024-11-25 12:50:43.825943] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.149 [2024-11-25 12:50:43.825949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.149 [2024-11-25 12:50:43.825956] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:04.149 [2024-11-25 12:50:43.825961] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:04.149 [2024-11-25 12:50:43.825964] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.149 [2024-11-25 12:50:43.825970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:04.149 [2024-11-25 12:50:43.833869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:04.149 [2024-11-25 12:50:43.833883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:04.149 [2024-11-25 12:50:43.833894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:04.149 
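The PRP bookkeeping in these identify and log-page commands follows from the transfer sizes: a page-aligned 4096-byte buffer fits in PRP1 alone, while the 8192-byte GET LOG PAGE buffer spans exactly two 4096-byte pages, so PRP2 carries the second page's address (0x2000002f7000 above) directly and no PRP list is needed:

    echo $(( 4096 / 4096 ))   # -> 1 page  -> "Number of PRP entries: 1", PRP1 only
    echo $(( 8192 / 4096 ))   # -> 2 pages -> "Number of PRP entries: 2", PRP1 + PRP2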
[2024-11-25 12:50:43.833901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:04.149 ===================================================== 00:16:04.149 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:04.149 ===================================================== 00:16:04.149 Controller Capabilities/Features 00:16:04.149 ================================ 00:16:04.149 Vendor ID: 4e58 00:16:04.149 Subsystem Vendor ID: 4e58 00:16:04.149 Serial Number: SPDK2 00:16:04.149 Model Number: SPDK bdev Controller 00:16:04.149 Firmware Version: 25.01 00:16:04.149 Recommended Arb Burst: 6 00:16:04.149 IEEE OUI Identifier: 8d 6b 50 00:16:04.149 Multi-path I/O 00:16:04.149 May have multiple subsystem ports: Yes 00:16:04.149 May have multiple controllers: Yes 00:16:04.149 Associated with SR-IOV VF: No 00:16:04.149 Max Data Transfer Size: 131072 00:16:04.149 Max Number of Namespaces: 32 00:16:04.149 Max Number of I/O Queues: 127 00:16:04.149 NVMe Specification Version (VS): 1.3 00:16:04.149 NVMe Specification Version (Identify): 1.3 00:16:04.149 Maximum Queue Entries: 256 00:16:04.149 Contiguous Queues Required: Yes 00:16:04.149 Arbitration Mechanisms Supported 00:16:04.149 Weighted Round Robin: Not Supported 00:16:04.149 Vendor Specific: Not Supported 00:16:04.150 Reset Timeout: 15000 ms 00:16:04.150 Doorbell Stride: 4 bytes 00:16:04.150 NVM Subsystem Reset: Not Supported 00:16:04.150 Command Sets Supported 00:16:04.150 NVM Command Set: Supported 00:16:04.150 Boot Partition: Not Supported 00:16:04.150 Memory Page Size Minimum: 4096 bytes 00:16:04.150 Memory Page Size Maximum: 4096 bytes 00:16:04.150 Persistent Memory Region: Not Supported 00:16:04.150 Optional Asynchronous Events Supported 00:16:04.150 Namespace Attribute Notices: Supported 00:16:04.150 Firmware Activation Notices: Not Supported 00:16:04.150 ANA Change Notices: Not Supported 00:16:04.150 PLE Aggregate Log Change Notices: Not Supported 00:16:04.150 LBA Status Info Alert Notices: Not Supported 00:16:04.150 EGE Aggregate Log Change Notices: Not Supported 00:16:04.150 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.150 Zone Descriptor Change Notices: Not Supported 00:16:04.150 Discovery Log Change Notices: Not Supported 00:16:04.150 Controller Attributes 00:16:04.150 128-bit Host Identifier: Supported 00:16:04.150 Non-Operational Permissive Mode: Not Supported 00:16:04.150 NVM Sets: Not Supported 00:16:04.150 Read Recovery Levels: Not Supported 00:16:04.150 Endurance Groups: Not Supported 00:16:04.150 Predictable Latency Mode: Not Supported 00:16:04.150 Traffic Based Keep ALive: Not Supported 00:16:04.150 Namespace Granularity: Not Supported 00:16:04.150 SQ Associations: Not Supported 00:16:04.150 UUID List: Not Supported 00:16:04.150 Multi-Domain Subsystem: Not Supported 00:16:04.150 Fixed Capacity Management: Not Supported 00:16:04.150 Variable Capacity Management: Not Supported 00:16:04.150 Delete Endurance Group: Not Supported 00:16:04.150 Delete NVM Set: Not Supported 00:16:04.150 Extended LBA Formats Supported: Not Supported 00:16:04.150 Flexible Data Placement Supported: Not Supported 00:16:04.150 00:16:04.150 Controller Memory Buffer Support 00:16:04.150 ================================ 00:16:04.150 Supported: No 00:16:04.150 00:16:04.150 Persistent Memory Region Support 00:16:04.150 ================================ 00:16:04.150 Supported: No 00:16:04.150 00:16:04.150 Admin Command Set Attributes 
00:16:04.150 ============================ 00:16:04.150 Security Send/Receive: Not Supported 00:16:04.150 Format NVM: Not Supported 00:16:04.150 Firmware Activate/Download: Not Supported 00:16:04.150 Namespace Management: Not Supported 00:16:04.150 Device Self-Test: Not Supported 00:16:04.150 Directives: Not Supported 00:16:04.150 NVMe-MI: Not Supported 00:16:04.150 Virtualization Management: Not Supported 00:16:04.150 Doorbell Buffer Config: Not Supported 00:16:04.150 Get LBA Status Capability: Not Supported 00:16:04.150 Command & Feature Lockdown Capability: Not Supported 00:16:04.150 Abort Command Limit: 4 00:16:04.150 Async Event Request Limit: 4 00:16:04.150 Number of Firmware Slots: N/A 00:16:04.150 Firmware Slot 1 Read-Only: N/A 00:16:04.150 Firmware Activation Without Reset: N/A 00:16:04.150 Multiple Update Detection Support: N/A 00:16:04.150 Firmware Update Granularity: No Information Provided 00:16:04.150 Per-Namespace SMART Log: No 00:16:04.150 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.150 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:04.150 Command Effects Log Page: Supported 00:16:04.150 Get Log Page Extended Data: Supported 00:16:04.150 Telemetry Log Pages: Not Supported 00:16:04.150 Persistent Event Log Pages: Not Supported 00:16:04.150 Supported Log Pages Log Page: May Support 00:16:04.150 Commands Supported & Effects Log Page: Not Supported 00:16:04.150 Feature Identifiers & Effects Log Page:May Support 00:16:04.150 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.150 Data Area 4 for Telemetry Log: Not Supported 00:16:04.150 Error Log Page Entries Supported: 128 00:16:04.150 Keep Alive: Supported 00:16:04.150 Keep Alive Granularity: 10000 ms 00:16:04.150 00:16:04.150 NVM Command Set Attributes 00:16:04.150 ========================== 00:16:04.150 Submission Queue Entry Size 00:16:04.150 Max: 64 00:16:04.150 Min: 64 00:16:04.150 Completion Queue Entry Size 00:16:04.150 Max: 16 00:16:04.150 Min: 16 00:16:04.150 Number of Namespaces: 32 00:16:04.150 Compare Command: Supported 00:16:04.150 Write Uncorrectable Command: Not Supported 00:16:04.150 Dataset Management Command: Supported 00:16:04.150 Write Zeroes Command: Supported 00:16:04.150 Set Features Save Field: Not Supported 00:16:04.150 Reservations: Not Supported 00:16:04.150 Timestamp: Not Supported 00:16:04.150 Copy: Supported 00:16:04.150 Volatile Write Cache: Present 00:16:04.150 Atomic Write Unit (Normal): 1 00:16:04.150 Atomic Write Unit (PFail): 1 00:16:04.150 Atomic Compare & Write Unit: 1 00:16:04.150 Fused Compare & Write: Supported 00:16:04.150 Scatter-Gather List 00:16:04.150 SGL Command Set: Supported (Dword aligned) 00:16:04.150 SGL Keyed: Not Supported 00:16:04.150 SGL Bit Bucket Descriptor: Not Supported 00:16:04.150 SGL Metadata Pointer: Not Supported 00:16:04.150 Oversized SGL: Not Supported 00:16:04.150 SGL Metadata Address: Not Supported 00:16:04.150 SGL Offset: Not Supported 00:16:04.150 Transport SGL Data Block: Not Supported 00:16:04.150 Replay Protected Memory Block: Not Supported 00:16:04.150 00:16:04.150 Firmware Slot Information 00:16:04.150 ========================= 00:16:04.150 Active slot: 1 00:16:04.150 Slot 1 Firmware Revision: 25.01 00:16:04.150 00:16:04.150 00:16:04.150 Commands Supported and Effects 00:16:04.150 ============================== 00:16:04.150 Admin Commands 00:16:04.150 -------------- 00:16:04.150 Get Log Page (02h): Supported 00:16:04.150 Identify (06h): Supported 00:16:04.150 Abort (08h): Supported 00:16:04.150 Set Features (09h): Supported 
00:16:04.150 Get Features (0Ah): Supported 00:16:04.150 Asynchronous Event Request (0Ch): Supported 00:16:04.150 Keep Alive (18h): Supported 00:16:04.150 I/O Commands 00:16:04.150 ------------ 00:16:04.150 Flush (00h): Supported LBA-Change 00:16:04.150 Write (01h): Supported LBA-Change 00:16:04.150 Read (02h): Supported 00:16:04.150 Compare (05h): Supported 00:16:04.150 Write Zeroes (08h): Supported LBA-Change 00:16:04.150 Dataset Management (09h): Supported LBA-Change 00:16:04.150 Copy (19h): Supported LBA-Change 00:16:04.150 00:16:04.150 Error Log 00:16:04.150 ========= 00:16:04.150 00:16:04.150 Arbitration 00:16:04.150 =========== 00:16:04.150 Arbitration Burst: 1 00:16:04.150 00:16:04.150 Power Management 00:16:04.150 ================ 00:16:04.150 Number of Power States: 1 00:16:04.150 Current Power State: Power State #0 00:16:04.150 Power State #0: 00:16:04.150 Max Power: 0.00 W 00:16:04.150 Non-Operational State: Operational 00:16:04.150 Entry Latency: Not Reported 00:16:04.150 Exit Latency: Not Reported 00:16:04.150 Relative Read Throughput: 0 00:16:04.150 Relative Read Latency: 0 00:16:04.150 Relative Write Throughput: 0 00:16:04.150 Relative Write Latency: 0 00:16:04.150 Idle Power: Not Reported 00:16:04.150 Active Power: Not Reported 00:16:04.150 Non-Operational Permissive Mode: Not Supported 00:16:04.150 00:16:04.150 Health Information 00:16:04.150 ================== 00:16:04.150 Critical Warnings: 00:16:04.150 Available Spare Space: OK 00:16:04.150 Temperature: OK 00:16:04.150 Device Reliability: OK 00:16:04.150 Read Only: No 00:16:04.150 Volatile Memory Backup: OK 00:16:04.150 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:04.150 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:04.150 Available Spare: 0% 00:16:04.150 Available Sp[2024-11-25 12:50:43.834002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:04.150 [2024-11-25 12:50:43.841868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:04.150 [2024-11-25 12:50:43.841900] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:04.150 [2024-11-25 12:50:43.841916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.150 [2024-11-25 12:50:43.841924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.150 [2024-11-25 12:50:43.841931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.150 [2024-11-25 12:50:43.841938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.150 [2024-11-25 12:50:43.841985] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:04.150 [2024-11-25 12:50:43.841998] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:04.150 [2024-11-25 12:50:43.842989] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.151 [2024-11-25 12:50:43.843038] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:04.151 [2024-11-25 12:50:43.843047] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:04.151 [2024-11-25 12:50:43.843996] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:04.151 [2024-11-25 12:50:43.844008] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:04.151 [2024-11-25 12:50:43.844058] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:04.151 [2024-11-25 12:50:43.846869] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.151 are Threshold: 0% 00:16:04.151 Life Percentage Used: 0% 00:16:04.151 Data Units Read: 0 00:16:04.151 Data Units Written: 0 00:16:04.151 Host Read Commands: 0 00:16:04.151 Host Write Commands: 0 00:16:04.151 Controller Busy Time: 0 minutes 00:16:04.151 Power Cycles: 0 00:16:04.151 Power On Hours: 0 hours 00:16:04.151 Unsafe Shutdowns: 0 00:16:04.151 Unrecoverable Media Errors: 0 00:16:04.151 Lifetime Error Log Entries: 0 00:16:04.151 Warning Temperature Time: 0 minutes 00:16:04.151 Critical Temperature Time: 0 minutes 00:16:04.151 00:16:04.151 Number of Queues 00:16:04.151 ================ 00:16:04.151 Number of I/O Submission Queues: 127 00:16:04.151 Number of I/O Completion Queues: 127 00:16:04.151 00:16:04.151 Active Namespaces 00:16:04.151 ================= 00:16:04.151 Namespace ID:1 00:16:04.151 Error Recovery Timeout: Unlimited 00:16:04.151 Command Set Identifier: NVM (00h) 00:16:04.151 Deallocate: Supported 00:16:04.151 Deallocated/Unwritten Error: Not Supported 00:16:04.151 Deallocated Read Value: Unknown 00:16:04.151 Deallocate in Write Zeroes: Not Supported 00:16:04.151 Deallocated Guard Field: 0xFFFF 00:16:04.151 Flush: Supported 00:16:04.151 Reservation: Supported 00:16:04.151 Namespace Sharing Capabilities: Multiple Controllers 00:16:04.151 Size (in LBAs): 131072 (0GiB) 00:16:04.151 Capacity (in LBAs): 131072 (0GiB) 00:16:04.151 Utilization (in LBAs): 131072 (0GiB) 00:16:04.151 NGUID: 7A8E3DFECAE54A50B1FA8DFE3149358F 00:16:04.151 UUID: 7a8e3dfe-cae5-4a50-b1fa-8dfe3149358f 00:16:04.151 Thin Provisioning: Not Supported 00:16:04.151 Per-NS Atomic Units: Yes 00:16:04.151 Atomic Boundary Size (Normal): 0 00:16:04.151 Atomic Boundary Size (PFail): 0 00:16:04.151 Atomic Boundary Offset: 0 00:16:04.151 Maximum Single Source Range Length: 65535 00:16:04.151 Maximum Copy Length: 65535 00:16:04.151 Maximum Source Range Count: 1 00:16:04.151 NGUID/EUI64 Never Reused: No 00:16:04.151 Namespace Write Protected: No 00:16:04.151 Number of LBA Formats: 1 00:16:04.151 Current LBA Format: LBA Format #00 00:16:04.151 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.151 00:16:04.151 12:50:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:04.151 [2024-11-25 12:50:44.040925] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:09.440 Initializing NVMe Controllers 00:16:09.440 
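The pass launched at @84 above drives the vfio-user controller with spdk_nvme_perf: queue depth 128 (-q), 4 KiB I/Os (-o 4096), a 5 second read workload (-w read -t 5), pinned to core 1 by the 0x2 core mask (-c). The -s 256 and -g flags are read here as the usual DPDK memory-size and single-file-segments options; confirm against --help on your build. A minimal sketch to repeat the pass by hand, assuming the target is still serving the same vfio-user socket directory:

# Re-run the 4 KiB read pass with the arguments shown above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
"$PERF" -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2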
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:09.440 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:09.440 Initialization complete. Launching workers. 00:16:09.440 ======================================================== 00:16:09.440 Latency(us) 00:16:09.440 Device Information : IOPS MiB/s Average min max 00:16:09.440 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40037.36 156.40 3197.56 845.30 8341.70 00:16:09.440 ======================================================== 00:16:09.440 Total : 40037.36 156.40 3197.56 845.30 8341.70 00:16:09.440 00:16:09.440 [2024-11-25 12:50:49.142058] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:09.440 12:50:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:09.440 [2024-11-25 12:50:49.332613] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.728 Initializing NVMe Controllers 00:16:14.728 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:14.728 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:14.728 Initialization complete. Launching workers. 00:16:14.728 ======================================================== 00:16:14.728 Latency(us) 00:16:14.728 Device Information : IOPS MiB/s Average min max 00:16:14.728 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35249.86 137.69 3630.72 1106.58 7059.42 00:16:14.728 ======================================================== 00:16:14.728 Total : 35249.86 137.69 3630.72 1106.58 7059.42 00:16:14.728 00:16:14.728 [2024-11-25 12:50:54.354838] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:14.728 12:50:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:14.728 [2024-11-25 12:50:54.567255] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.014 [2024-11-25 12:50:59.709950] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.014 Initializing NVMe Controllers 00:16:20.014 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.014 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.014 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:20.014 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:20.014 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:20.014 Initialization complete. Launching workers. 
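The read and write tables above are internally consistent: MiB/s is IOPS times the 4 KiB I/O size, and with a fixed queue depth of 128 the average latency follows Little's law, latency ≈ depth / IOPS. A quick check on the read-pass numbers (the small gap against the reported 3197.56 us is per-I/O software overhead):

# Sanity-check the read row: 40037.36 IOPS, 4096 B I/Os, queue depth 128.
awk 'BEGIN {
    iops = 40037.36; io_bytes = 4096; qd = 128
    printf "MiB/s      = %.2f\n", iops * io_bytes / (1024 * 1024)  # table: 156.40
    printf "avg lat us = %.2f\n", qd / iops * 1e6                  # table: 3197.56
}'

The same arithmetic reproduces the write row: 35249.86 IOPS gives 137.69 MiB/s, and 128 / 35249.86 ≈ 3631 us against the reported 3630.72 us.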
00:16:20.014 Starting thread on core 2 00:16:20.014 Starting thread on core 3 00:16:20.014 Starting thread on core 1 00:16:20.014 12:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:20.273 [2024-11-25 12:51:00.004293] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.574 [2024-11-25 12:51:03.058014] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.574 Initializing NVMe Controllers 00:16:23.574 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.574 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.574 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:23.574 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:23.574 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:23.574 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:23.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:23.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:23.574 Initialization complete. Launching workers. 00:16:23.574 Starting thread on core 1 with urgent priority queue 00:16:23.574 Starting thread on core 2 with urgent priority queue 00:16:23.574 Starting thread on core 3 with urgent priority queue 00:16:23.574 Starting thread on core 0 with urgent priority queue 00:16:23.574 SPDK bdev Controller (SPDK2 ) core 0: 13132.67 IO/s 7.61 secs/100000 ios 00:16:23.574 SPDK bdev Controller (SPDK2 ) core 1: 13574.67 IO/s 7.37 secs/100000 ios 00:16:23.574 SPDK bdev Controller (SPDK2 ) core 2: 14992.33 IO/s 6.67 secs/100000 ios 00:16:23.574 SPDK bdev Controller (SPDK2 ) core 3: 9209.67 IO/s 10.86 secs/100000 ios 00:16:23.574 ======================================================== 00:16:23.574 00:16:23.574 12:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:23.574 [2024-11-25 12:51:03.355256] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.574 Initializing NVMe Controllers 00:16:23.574 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.574 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.574 Namespace ID: 1 size: 0GB 00:16:23.574 Initialization complete. 00:16:23.574 INFO: using host memory buffer for IO 00:16:23.574 Hello world! 
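In the arbitration summary below, secs/100000 ios is simply the inverse of the per-core rate scaled to the -n 100000 I/O count, i.e. 100000 / (IO/s). Reproducing the column from the printed rates:

# Second column of the arbitration table = 100000 / IO/s per core.
awk 'BEGIN {
    split("13132.67 13574.67 14992.33 9209.67", rate)
    for (core = 0; core < 4; core++)
        printf "core %d: %.2f secs/100000 ios\n", core, 100000 / rate[core + 1]
}'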
00:16:23.574 [2024-11-25 12:51:03.365333] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.574 12:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:23.836 [2024-11-25 12:51:03.655136] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.223 Initializing NVMe Controllers 00:16:25.223 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.223 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.223 Initialization complete. Launching workers. 00:16:25.223 submit (in ns) avg, min, max = 7421.4, 3911.7, 7988996.7 00:16:25.223 complete (in ns) avg, min, max = 18549.0, 2370.0, 4000041.7 00:16:25.223 00:16:25.223 Submit histogram 00:16:25.223 ================ 00:16:25.223 Range in us Cumulative Count 00:16:25.223 3.893 - 3.920: 0.1937% ( 37) 00:16:25.223 3.920 - 3.947: 3.2096% ( 576) 00:16:25.223 3.947 - 3.973: 9.5659% ( 1214) 00:16:25.223 3.973 - 4.000: 19.1685% ( 1834) 00:16:25.223 4.000 - 4.027: 30.2738% ( 2121) 00:16:25.223 4.027 - 4.053: 42.1122% ( 2261) 00:16:25.223 4.053 - 4.080: 57.0972% ( 2862) 00:16:25.223 4.080 - 4.107: 74.3704% ( 3299) 00:16:25.223 4.107 - 4.133: 87.5962% ( 2526) 00:16:25.223 4.133 - 4.160: 94.7694% ( 1370) 00:16:25.223 4.160 - 4.187: 98.1203% ( 640) 00:16:25.223 4.187 - 4.213: 99.2146% ( 209) 00:16:25.223 4.213 - 4.240: 99.4188% ( 39) 00:16:25.223 4.240 - 4.267: 99.4659% ( 9) 00:16:25.223 4.267 - 4.293: 99.4816% ( 3) 00:16:25.223 4.293 - 4.320: 99.4869% ( 1) 00:16:25.223 4.453 - 4.480: 99.4921% ( 1) 00:16:25.223 5.440 - 5.467: 99.4974% ( 1) 00:16:25.223 5.520 - 5.547: 99.5026% ( 1) 00:16:25.223 5.840 - 5.867: 99.5078% ( 1) 00:16:25.223 5.867 - 5.893: 99.5131% ( 1) 00:16:25.223 6.000 - 6.027: 99.5288% ( 3) 00:16:25.223 6.053 - 6.080: 99.5340% ( 1) 00:16:25.223 6.107 - 6.133: 99.5445% ( 2) 00:16:25.223 6.133 - 6.160: 99.5602% ( 3) 00:16:25.223 6.160 - 6.187: 99.5811% ( 4) 00:16:25.223 6.187 - 6.213: 99.5968% ( 3) 00:16:25.223 6.213 - 6.240: 99.6073% ( 2) 00:16:25.223 6.240 - 6.267: 99.6230% ( 3) 00:16:25.223 6.267 - 6.293: 99.6335% ( 2) 00:16:25.223 6.293 - 6.320: 99.6387% ( 1) 00:16:25.223 6.320 - 6.347: 99.6440% ( 1) 00:16:25.223 6.373 - 6.400: 99.6597% ( 3) 00:16:25.223 6.453 - 6.480: 99.6649% ( 1) 00:16:25.223 6.480 - 6.507: 99.6701% ( 1) 00:16:25.223 6.507 - 6.533: 99.6806% ( 2) 00:16:25.223 6.560 - 6.587: 99.6911% ( 2) 00:16:25.223 6.587 - 6.613: 99.6963% ( 1) 00:16:25.223 6.613 - 6.640: 99.7173% ( 4) 00:16:25.223 6.640 - 6.667: 99.7382% ( 4) 00:16:25.223 6.667 - 6.693: 99.7539% ( 3) 00:16:25.223 6.773 - 6.800: 99.7644% ( 2) 00:16:25.223 6.827 - 6.880: 99.7696% ( 1) 00:16:25.223 6.880 - 6.933: 99.7749% ( 1) 00:16:25.223 6.933 - 6.987: 99.7853% ( 2) 00:16:25.223 6.987 - 7.040: 99.7958% ( 2) 00:16:25.223 7.040 - 7.093: 99.8063% ( 2) 00:16:25.223 7.093 - 7.147: 99.8115% ( 1) 00:16:25.223 7.200 - 7.253: 99.8220% ( 2) 00:16:25.223 7.253 - 7.307: 99.8325% ( 2) 00:16:25.223 7.307 - 7.360: 99.8377% ( 1) 00:16:25.223 7.413 - 7.467: 99.8482% ( 2) 00:16:25.223 7.520 - 7.573: 99.8691% ( 4) 00:16:25.223 7.680 - 7.733: 99.8743% ( 1) 00:16:25.223 7.787 - 7.840: 99.8796% ( 1) 00:16:25.223 7.893 - 7.947: 99.8848% ( 1) 00:16:25.223 7.947 - 8.000: 99.8900% ( 1) 00:16:25.223 8.107 - 8.160: 99.8953% ( 1) 
00:16:25.223 8.267 - 8.320: 99.9005% ( 1) 00:16:25.223 9.067 - 9.120: 99.9058% ( 1) 00:16:25.223 9.280 - 9.333: 99.9110% ( 1) 00:16:25.223 11.680 - 11.733: 99.9162% ( 1) 00:16:25.223 52.693 - 52.907: 99.9215% ( 1) 00:16:25.223 3986.773 - 4014.080: 99.9948% ( 14) 00:16:25.223 7973.547 - 8028.160: 100.0000% ( 1) 00:16:25.223 00:16:25.223 Complete histogram 00:16:25.223 ================== 00:16:25.223 Range in us Cumulative Count 00:16:25.223 2.360 - 2.373: 0.0052% ( 1) 00:16:25.223 2.373 - 2.387: 0.0942% ( 17) 00:16:25.223 2.387 - 2.400: 0.8377% ( 142) 00:16:25.223 2.400 - 2.413: 0.9006% ( 12) 00:16:25.223 2.413 - 2.427: 1.0629% ( 31) 00:16:25.223 2.427 - 2.440: 1.2618% ( 38) 00:16:25.223 2.440 - 2.453: 47.2695% ( 8787) 00:16:25.223 2.453 - 2.467: 58.8146% ( 2205) 00:16:25.223 2.467 - 2.480: 71.0823% ( 2343) 00:16:25.223 2.480 - 2.493: 77.7842% ( 1280) 00:16:25.223 2.493 - 2.507: 80.8105% ( 578) 00:16:25.223 2.507 - 2.520: 84.5227% ( 709) 00:16:25.223 2.520 - [2024-11-25 12:51:04.753545] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:25.223 2.533: 90.3817% ( 1119) 00:16:25.223 2.533 - 2.547: 95.1516% ( 911) 00:16:25.223 2.547 - 2.560: 97.1569% ( 383) 00:16:25.223 2.560 - 2.573: 98.4816% ( 253) 00:16:25.223 2.573 - 2.587: 99.1361% ( 125) 00:16:25.223 2.587 - 2.600: 99.3560% ( 42) 00:16:25.223 2.600 - 2.613: 99.3717% ( 3) 00:16:25.223 2.640 - 2.653: 99.3769% ( 1) 00:16:25.223 4.533 - 4.560: 99.3874% ( 2) 00:16:25.223 4.587 - 4.613: 99.3926% ( 1) 00:16:25.223 4.613 - 4.640: 99.4031% ( 2) 00:16:25.223 4.640 - 4.667: 99.4083% ( 1) 00:16:25.223 4.667 - 4.693: 99.4241% ( 3) 00:16:25.223 4.693 - 4.720: 99.4293% ( 1) 00:16:25.223 4.720 - 4.747: 99.4345% ( 1) 00:16:25.223 4.747 - 4.773: 99.4398% ( 1) 00:16:25.223 4.773 - 4.800: 99.4450% ( 1) 00:16:25.223 4.800 - 4.827: 99.4555% ( 2) 00:16:25.223 4.853 - 4.880: 99.4607% ( 1) 00:16:25.223 4.933 - 4.960: 99.4659% ( 1) 00:16:25.223 4.987 - 5.013: 99.4712% ( 1) 00:16:25.223 5.067 - 5.093: 99.4764% ( 1) 00:16:25.223 5.093 - 5.120: 99.4816% ( 1) 00:16:25.223 5.120 - 5.147: 99.4921% ( 2) 00:16:25.223 5.147 - 5.173: 99.5078% ( 3) 00:16:25.223 5.280 - 5.307: 99.5131% ( 1) 00:16:25.223 5.360 - 5.387: 99.5235% ( 2) 00:16:25.223 5.387 - 5.413: 99.5288% ( 1) 00:16:25.223 5.573 - 5.600: 99.5340% ( 1) 00:16:25.223 5.627 - 5.653: 99.5392% ( 1) 00:16:25.223 5.680 - 5.707: 99.5445% ( 1) 00:16:25.223 5.707 - 5.733: 99.5497% ( 1) 00:16:25.223 5.733 - 5.760: 99.5550% ( 1) 00:16:25.223 5.787 - 5.813: 99.5602% ( 1) 00:16:25.223 5.947 - 5.973: 99.5654% ( 1) 00:16:25.223 6.240 - 6.267: 99.5707% ( 1) 00:16:25.223 7.947 - 8.000: 99.5759% ( 1) 00:16:25.223 11.093 - 11.147: 99.5811% ( 1) 00:16:25.223 11.680 - 11.733: 99.5864% ( 1) 00:16:25.223 13.760 - 13.867: 99.5916% ( 1) 00:16:25.223 33.280 - 33.493: 99.5968% ( 1) 00:16:25.223 3317.760 - 3331.413: 99.6021% ( 1) 00:16:25.223 3986.773 - 4014.080: 100.0000% ( 76) 00:16:25.223 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 
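The -H histograms printed by the overhead tool read as: bucket range in microseconds, cumulative share of operations up to that bucket, and the bucket's own count in parentheses. On this run roughly 99.5 percent of submissions finish the submit path below about 4.3 us, while 14 operations in the ~4 ms bucket and a single one near 8 ms pull the average from about 4 us up to the reported 7.4 us (7421.4 ns). A sketch to repeat the measurement, with the arguments used at @89 above:

# Re-run the submit/complete overhead measurement with histograms (-H).
OVERHEAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead
"$OVERHEAD" -o 4096 -t 1 -H -g -d 256 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'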
00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:25.224 [ 00:16:25.224 { 00:16:25.224 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:25.224 "subtype": "Discovery", 00:16:25.224 "listen_addresses": [], 00:16:25.224 "allow_any_host": true, 00:16:25.224 "hosts": [] 00:16:25.224 }, 00:16:25.224 { 00:16:25.224 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:25.224 "subtype": "NVMe", 00:16:25.224 "listen_addresses": [ 00:16:25.224 { 00:16:25.224 "trtype": "VFIOUSER", 00:16:25.224 "adrfam": "IPv4", 00:16:25.224 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:25.224 "trsvcid": "0" 00:16:25.224 } 00:16:25.224 ], 00:16:25.224 "allow_any_host": true, 00:16:25.224 "hosts": [], 00:16:25.224 "serial_number": "SPDK1", 00:16:25.224 "model_number": "SPDK bdev Controller", 00:16:25.224 "max_namespaces": 32, 00:16:25.224 "min_cntlid": 1, 00:16:25.224 "max_cntlid": 65519, 00:16:25.224 "namespaces": [ 00:16:25.224 { 00:16:25.224 "nsid": 1, 00:16:25.224 "bdev_name": "Malloc1", 00:16:25.224 "name": "Malloc1", 00:16:25.224 "nguid": "54D6027E1F724C2188E1E69266C435AB", 00:16:25.224 "uuid": "54d6027e-1f72-4c21-88e1-e69266c435ab" 00:16:25.224 }, 00:16:25.224 { 00:16:25.224 "nsid": 2, 00:16:25.224 "bdev_name": "Malloc3", 00:16:25.224 "name": "Malloc3", 00:16:25.224 "nguid": "18F5C08A01C64B63BFD1BE1527758424", 00:16:25.224 "uuid": "18f5c08a-01c6-4b63-bfd1-be1527758424" 00:16:25.224 } 00:16:25.224 ] 00:16:25.224 }, 00:16:25.224 { 00:16:25.224 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:25.224 "subtype": "NVMe", 00:16:25.224 "listen_addresses": [ 00:16:25.224 { 00:16:25.224 "trtype": "VFIOUSER", 00:16:25.224 "adrfam": "IPv4", 00:16:25.224 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:25.224 "trsvcid": "0" 00:16:25.224 } 00:16:25.224 ], 00:16:25.224 "allow_any_host": true, 00:16:25.224 "hosts": [], 00:16:25.224 "serial_number": "SPDK2", 00:16:25.224 "model_number": "SPDK bdev Controller", 00:16:25.224 "max_namespaces": 32, 00:16:25.224 "min_cntlid": 1, 00:16:25.224 "max_cntlid": 65519, 00:16:25.224 "namespaces": [ 00:16:25.224 { 00:16:25.224 "nsid": 1, 00:16:25.224 "bdev_name": "Malloc2", 00:16:25.224 "name": "Malloc2", 00:16:25.224 "nguid": "7A8E3DFECAE54A50B1FA8DFE3149358F", 00:16:25.224 "uuid": "7a8e3dfe-cae5-4a50-b1fa-8dfe3149358f" 00:16:25.224 } 00:16:25.224 ] 00:16:25.224 } 00:16:25.224 ] 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=589893 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
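The nvmf_get_subsystems dump above is plain JSON on stdout, so ad-hoc checks script easily. A sketch, assuming jq is available (it is not used anywhere in this run):

# List nsid, bdev name and UUID for every namespace of cnode2.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_get_subsystems | jq -r '
    .[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2")
        | .namespaces[] | "\(.nsid)  \(.bdev_name)  \(.uuid)"'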
-e /tmp/aer_touch_file ']' 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:25.224 12:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:25.485 Malloc4 00:16:25.485 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:25.485 [2024-11-25 12:51:05.190920] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.485 [2024-11-25 12:51:05.321802] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:25.485 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:25.485 Asynchronous Event Request test 00:16:25.485 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.485 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.485 Registering asynchronous event callbacks... 00:16:25.485 Starting namespace attribute notice tests for all controllers... 00:16:25.485 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:25.485 aer_cb - Changed Namespace 00:16:25.485 Cleaning up... 00:16:25.746 [ 00:16:25.746 { 00:16:25.746 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:25.746 "subtype": "Discovery", 00:16:25.746 "listen_addresses": [], 00:16:25.746 "allow_any_host": true, 00:16:25.746 "hosts": [] 00:16:25.746 }, 00:16:25.746 { 00:16:25.746 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:25.746 "subtype": "NVMe", 00:16:25.746 "listen_addresses": [ 00:16:25.746 { 00:16:25.746 "trtype": "VFIOUSER", 00:16:25.746 "adrfam": "IPv4", 00:16:25.746 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:25.746 "trsvcid": "0" 00:16:25.746 } 00:16:25.746 ], 00:16:25.746 "allow_any_host": true, 00:16:25.746 "hosts": [], 00:16:25.746 "serial_number": "SPDK1", 00:16:25.746 "model_number": "SPDK bdev Controller", 00:16:25.746 "max_namespaces": 32, 00:16:25.746 "min_cntlid": 1, 00:16:25.746 "max_cntlid": 65519, 00:16:25.746 "namespaces": [ 00:16:25.746 { 00:16:25.746 "nsid": 1, 00:16:25.746 "bdev_name": "Malloc1", 00:16:25.746 "name": "Malloc1", 00:16:25.746 "nguid": "54D6027E1F724C2188E1E69266C435AB", 00:16:25.746 "uuid": "54d6027e-1f72-4c21-88e1-e69266c435ab" 00:16:25.746 }, 00:16:25.746 { 00:16:25.746 "nsid": 2, 00:16:25.746 "bdev_name": "Malloc3", 00:16:25.746 "name": "Malloc3", 00:16:25.746 "nguid": "18F5C08A01C64B63BFD1BE1527758424", 00:16:25.746 "uuid": "18f5c08a-01c6-4b63-bfd1-be1527758424" 00:16:25.746 } 00:16:25.746 ] 00:16:25.746 }, 00:16:25.746 { 00:16:25.746 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:25.746 "subtype": "NVMe", 00:16:25.746 "listen_addresses": [ 00:16:25.746 { 00:16:25.746 "trtype": "VFIOUSER", 00:16:25.746 "adrfam": "IPv4", 00:16:25.746 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:25.746 "trsvcid": "0" 00:16:25.746 } 00:16:25.746 ], 00:16:25.746 "allow_any_host": true, 00:16:25.746 "hosts": [], 00:16:25.746 "serial_number": "SPDK2", 00:16:25.746 "model_number": "SPDK bdev 
Controller", 00:16:25.746 "max_namespaces": 32, 00:16:25.746 "min_cntlid": 1, 00:16:25.746 "max_cntlid": 65519, 00:16:25.746 "namespaces": [ 00:16:25.747 { 00:16:25.747 "nsid": 1, 00:16:25.747 "bdev_name": "Malloc2", 00:16:25.747 "name": "Malloc2", 00:16:25.747 "nguid": "7A8E3DFECAE54A50B1FA8DFE3149358F", 00:16:25.747 "uuid": "7a8e3dfe-cae5-4a50-b1fa-8dfe3149358f" 00:16:25.747 }, 00:16:25.747 { 00:16:25.747 "nsid": 2, 00:16:25.747 "bdev_name": "Malloc4", 00:16:25.747 "name": "Malloc4", 00:16:25.747 "nguid": "0E8B89898BC74727A1307DAB2C3D78CE", 00:16:25.747 "uuid": "0e8b8989-8bc7-4727-a130-7dab2c3d78ce" 00:16:25.747 } 00:16:25.747 ] 00:16:25.747 } 00:16:25.747 ] 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 589893 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 580633 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 580633 ']' 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 580633 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580633 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580633' 00:16:25.747 killing process with pid 580633 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 580633 00:16:25.747 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 580633 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=590167 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 590167' 00:16:26.051 Process pid: 590167 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 590167 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 590167 ']' 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.051 12:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:26.051 [2024-11-25 12:51:05.820065] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:26.051 [2024-11-25 12:51:05.820993] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:16:26.051 [2024-11-25 12:51:05.821035] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.051 [2024-11-25 12:51:05.899408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.051 [2024-11-25 12:51:05.935092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.051 [2024-11-25 12:51:05.935128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.051 [2024-11-25 12:51:05.935137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.051 [2024-11-25 12:51:05.935144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.051 [2024-11-25 12:51:05.935150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.051 [2024-11-25 12:51:05.936895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.051 [2024-11-25 12:51:05.937134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.052 [2024-11-25 12:51:05.937134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.052 [2024-11-25 12:51:05.936970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.333 [2024-11-25 12:51:05.992293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:26.333 [2024-11-25 12:51:05.992365] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:26.333 [2024-11-25 12:51:05.993370] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:26.333 [2024-11-25 12:51:05.994256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:16:26.333 [2024-11-25 12:51:05.994332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:26.903 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.903 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:26.903 12:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:27.844 12:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:28.104 12:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:28.104 12:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:28.104 12:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:28.104 12:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:28.104 12:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:28.364 Malloc1 00:16:28.364 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:28.364 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:28.625 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:28.886 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:28.886 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:28.886 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:29.147 Malloc2 00:16:29.147 12:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:29.147 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:29.407 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:29.668 12:51:09 
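Everything from the @108 relaunch down to the second nvmf_subsystem_add_listener is the standard two-device vfio-user bring-up, except that the target now runs with --interrupt-mode and the transport is created with -M -I. The per-device steps, written as the seq loop the script unrolls (paths as in this run; the consolidated form is editorial):

# Interrupt-mode target state: one malloc-backed subsystem per device dir.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I
for i in 1 2; do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    mkdir -p "$dir"
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$dir" -s 0
done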
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 590167 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 590167 ']' 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 590167 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590167 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590167' 00:16:29.668 killing process with pid 590167 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 590167 00:16:29.668 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 590167 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:29.942 00:16:29.942 real 0m51.934s 00:16:29.942 user 3m19.031s 00:16:29.942 sys 0m2.814s 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:29.942 ************************************ 00:16:29.942 END TEST nvmf_vfio_user 00:16:29.942 ************************************ 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:29.942 ************************************ 00:16:29.942 START TEST nvmf_vfio_user_nvme_compliance 00:16:29.942 ************************************ 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:29.942 * Looking for test storage... 
00:16:29.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:29.942 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.204 --rc genhtml_branch_coverage=1 00:16:30.204 --rc genhtml_function_coverage=1 00:16:30.204 --rc genhtml_legend=1 00:16:30.204 --rc geninfo_all_blocks=1 00:16:30.204 --rc geninfo_unexecuted_blocks=1 00:16:30.204 00:16:30.204 ' 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.204 --rc genhtml_branch_coverage=1 00:16:30.204 --rc genhtml_function_coverage=1 00:16:30.204 --rc genhtml_legend=1 00:16:30.204 --rc geninfo_all_blocks=1 00:16:30.204 --rc geninfo_unexecuted_blocks=1 00:16:30.204 00:16:30.204 ' 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.204 --rc genhtml_branch_coverage=1 00:16:30.204 --rc genhtml_function_coverage=1 00:16:30.204 --rc genhtml_legend=1 00:16:30.204 --rc geninfo_all_blocks=1 00:16:30.204 --rc geninfo_unexecuted_blocks=1 00:16:30.204 00:16:30.204 ' 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.204 --rc genhtml_branch_coverage=1 00:16:30.204 --rc genhtml_function_coverage=1 00:16:30.204 --rc genhtml_legend=1 00:16:30.204 --rc geninfo_all_blocks=1 00:16:30.204 --rc 
geninfo_unexecuted_blocks=1 00:16:30.204 00:16:30.204 ' 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.204 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=591292 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 591292' 00:16:30.205 Process pid: 591292 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 591292 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 591292 ']' 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.205 12:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:30.205 [2024-11-25 12:51:09.950553] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:16:30.205 [2024-11-25 12:51:09.950629] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.205 [2024-11-25 12:51:10.036454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:30.205 [2024-11-25 12:51:10.081934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.205 [2024-11-25 12:51:10.081975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.205 [2024-11-25 12:51:10.081984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.205 [2024-11-25 12:51:10.081990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.205 [2024-11-25 12:51:10.081996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.205 [2024-11-25 12:51:10.083476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.205 [2024-11-25 12:51:10.083602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.205 [2024-11-25 12:51:10.083604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.149 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.149 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:31.149 12:51:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.091 malloc0 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:32.091 12:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.091 12:51:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:32.091 00:16:32.091 00:16:32.091 CUnit - A unit testing framework for C - Version 2.1-3 00:16:32.091 http://cunit.sourceforge.net/ 00:16:32.091 00:16:32.091 00:16:32.091 Suite: nvme_compliance 00:16:32.352 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-25 12:51:12.038322] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.352 [2024-11-25 12:51:12.039664] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:32.352 [2024-11-25 12:51:12.039675] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:32.352 [2024-11-25 12:51:12.039679] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:32.352 [2024-11-25 12:51:12.041340] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.352 passed 00:16:32.352 Test: admin_identify_ctrlr_verify_fused ...[2024-11-25 12:51:12.136956] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.352 [2024-11-25 12:51:12.139970] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.352 passed 00:16:32.352 Test: admin_identify_ns ...[2024-11-25 12:51:12.239233] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.612 [2024-11-25 12:51:12.299873] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:32.612 [2024-11-25 12:51:12.307872] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:32.612 [2024-11-25 12:51:12.328985] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:32.612 passed 00:16:32.612 Test: admin_get_features_mandatory_features ...[2024-11-25 12:51:12.424043] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.612 [2024-11-25 12:51:12.427056] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.612 passed 00:16:32.872 Test: admin_get_features_optional_features ...[2024-11-25 12:51:12.522621] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.872 [2024-11-25 12:51:12.525631] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.872 passed 00:16:32.872 Test: admin_set_features_number_of_queues ...[2024-11-25 12:51:12.620439] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:32.872 [2024-11-25 12:51:12.728965] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:32.872 passed 00:16:33.133 Test: admin_get_log_page_mandatory_logs ...[2024-11-25 12:51:12.822633] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.133 [2024-11-25 12:51:12.825651] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.133 passed 00:16:33.133 Test: admin_get_log_page_with_lpo ...[2024-11-25 12:51:12.917982] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.133 [2024-11-25 12:51:12.985907] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:33.133 [2024-11-25 12:51:12.999921] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.393 passed 00:16:33.393 Test: fabric_property_get ...[2024-11-25 12:51:13.095221] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.393 [2024-11-25 12:51:13.096467] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:33.393 [2024-11-25 12:51:13.098232] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.393 passed 00:16:33.393 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-25 12:51:13.194808] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.393 [2024-11-25 12:51:13.196057] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:33.393 [2024-11-25 12:51:13.197826] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.393 passed 00:16:33.393 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-25 12:51:13.291801] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.653 [2024-11-25 12:51:13.376880] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:33.653 [2024-11-25 12:51:13.392871] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:33.653 [2024-11-25 12:51:13.397959] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.653 passed 00:16:33.653 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-25 12:51:13.491592] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.653 [2024-11-25 12:51:13.492844] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:33.653 [2024-11-25 12:51:13.494613] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.653 passed 00:16:33.913 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-25 12:51:13.587795] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.913 [2024-11-25 12:51:13.662871] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:33.913 [2024-11-25 12:51:13.686869] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:33.914 [2024-11-25 12:51:13.691997] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.914 passed 00:16:33.914 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-25 12:51:13.785989] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.914 [2024-11-25 12:51:13.787237] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:33.914 [2024-11-25 12:51:13.787256] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:33.914 [2024-11-25 12:51:13.789007] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.173 passed 00:16:34.173 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-25 12:51:13.882109] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.173 [2024-11-25 12:51:13.973875] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:34.173 [2024-11-25 12:51:13.981867] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:34.173 [2024-11-25 12:51:13.989874] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:34.173 [2024-11-25 12:51:13.997869] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:34.173 [2024-11-25 12:51:14.029982] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.173 passed 00:16:34.433 Test: admin_create_io_sq_verify_pc ...[2024-11-25 12:51:14.121589] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.433 [2024-11-25 12:51:14.139883] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:34.433 [2024-11-25 12:51:14.157771] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.433 passed 00:16:34.433 Test: admin_create_io_qp_max_qps ...[2024-11-25 12:51:14.252298] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:35.817 [2024-11-25 12:51:15.356878] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:36.076 [2024-11-25 12:51:15.743493] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.076 passed 00:16:36.076 Test: admin_create_io_sq_shared_cq ...[2024-11-25 12:51:15.836479] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.076 [2024-11-25 12:51:15.967870] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:36.336 [2024-11-25 12:51:16.004930] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.336 passed 00:16:36.336 00:16:36.336 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.336 suites 1 1 n/a 0 0 00:16:36.336 tests 18 18 18 0 0 00:16:36.336 asserts 
360 360 360 0 n/a 00:16:36.336 00:16:36.336 Elapsed time = 1.667 seconds 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 591292 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 591292 ']' 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 591292 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591292 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591292' 00:16:36.336 killing process with pid 591292 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 591292 00:16:36.336 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 591292 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:36.596 00:16:36.596 real 0m6.606s 00:16:36.596 user 0m18.676s 00:16:36.596 sys 0m0.565s 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:36.596 ************************************ 00:16:36.596 END TEST nvmf_vfio_user_nvme_compliance 00:16:36.596 ************************************ 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:36.596 ************************************ 00:16:36.596 START TEST nvmf_vfio_user_fuzz 00:16:36.596 ************************************ 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:36.596 * Looking for test storage... 
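For reference, the killprocess sequence traced above (the kill -0 liveness probe, the ps comm lookup, then kill and wait on pid 591292) reduces to roughly the sketch below. This is a paraphrase of the observable steps in the log, not the verbatim autotest_common.sh helper:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
      ps --no-headers -o comm= "$pid"          # log which process is being killed
      kill "$pid"                              # request shutdown (default SIGTERM)
      wait "$pid" 2>/dev/null || true          # reap it so sockets/files are freed
  }

Reaping with wait matters here: the next test stage reuses /var/tmp/spdk.sock and /var/run/vfio-user, so the target must be fully gone first.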
00:16:36.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:36.596 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:36.857 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:36.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.858 --rc genhtml_branch_coverage=1 00:16:36.858 --rc genhtml_function_coverage=1 00:16:36.858 --rc genhtml_legend=1 00:16:36.858 --rc geninfo_all_blocks=1 00:16:36.858 --rc geninfo_unexecuted_blocks=1 00:16:36.858 00:16:36.858 ' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:36.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.858 --rc genhtml_branch_coverage=1 00:16:36.858 --rc genhtml_function_coverage=1 00:16:36.858 --rc genhtml_legend=1 00:16:36.858 --rc geninfo_all_blocks=1 00:16:36.858 --rc geninfo_unexecuted_blocks=1 00:16:36.858 00:16:36.858 ' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:36.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.858 --rc genhtml_branch_coverage=1 00:16:36.858 --rc genhtml_function_coverage=1 00:16:36.858 --rc genhtml_legend=1 00:16:36.858 --rc geninfo_all_blocks=1 00:16:36.858 --rc geninfo_unexecuted_blocks=1 00:16:36.858 00:16:36.858 ' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:36.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.858 --rc genhtml_branch_coverage=1 00:16:36.858 --rc genhtml_function_coverage=1 00:16:36.858 --rc genhtml_legend=1 00:16:36.858 --rc geninfo_all_blocks=1 00:16:36.858 --rc geninfo_unexecuted_blocks=1 00:16:36.858 00:16:36.858 ' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:36.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:36.858 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=592776 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 592776' 00:16:36.859 Process pid: 592776 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 592776 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 592776 ']' 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
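The waitforlisten call traced above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) amounts to polling the target's RPC socket until it answers. A hedged sketch under those assumptions; the rpc.py rpc_get_methods probe is an inference from the variables in the trace, not the verbatim helper:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1               # target died early
          if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                                         # socket is serving RPCs
          fi
          sleep 0.1
      done
      return 1                                                 # never came up
  }

Only after this returns 0 does the script create the VFIOUSER transport and subsystem, which is why the "sleep 1" and the rpc_cmd calls below follow it.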
00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:36.859 12:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:37.800 12:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.800 12:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:37.800 12:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.741 malloc0 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:38.741 12:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:10.857 Fuzzing completed. Shutting down the fuzz application 00:17:10.857 00:17:10.857 Dumping successful admin opcodes: 00:17:10.857 8, 9, 10, 24, 00:17:10.857 Dumping successful io opcodes: 00:17:10.857 0, 00:17:10.857 NS: 0x20000081ef00 I/O qp, Total commands completed: 1117408, total successful commands: 4399, random_seed: 3246283328 00:17:10.857 NS: 0x20000081ef00 admin qp, Total commands completed: 140514, total successful commands: 1140, random_seed: 4031042240 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 592776 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 592776 ']' 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 592776 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 592776 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 592776' 00:17:10.857 killing process with pid 592776 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 592776 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 592776 00:17:10.857 12:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:10.857 00:17:10.857 real 0m33.724s 00:17:10.857 user 0m37.753s 00:17:10.857 sys 0m26.360s 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.857 ************************************ 00:17:10.857 END TEST nvmf_vfio_user_fuzz 00:17:10.857 ************************************ 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.857 ************************************ 00:17:10.857 START TEST nvmf_auth_target 00:17:10.857 ************************************ 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:10.857 * Looking for test storage... 00:17:10.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.857 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.858 --rc genhtml_branch_coverage=1 00:17:10.858 --rc genhtml_function_coverage=1 00:17:10.858 --rc genhtml_legend=1 00:17:10.858 --rc geninfo_all_blocks=1 00:17:10.858 --rc geninfo_unexecuted_blocks=1 00:17:10.858 00:17:10.858 ' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.858 --rc genhtml_branch_coverage=1 00:17:10.858 --rc genhtml_function_coverage=1 00:17:10.858 --rc genhtml_legend=1 00:17:10.858 --rc geninfo_all_blocks=1 00:17:10.858 --rc geninfo_unexecuted_blocks=1 00:17:10.858 00:17:10.858 ' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.858 --rc genhtml_branch_coverage=1 00:17:10.858 --rc genhtml_function_coverage=1 00:17:10.858 --rc genhtml_legend=1 00:17:10.858 --rc geninfo_all_blocks=1 00:17:10.858 --rc geninfo_unexecuted_blocks=1 00:17:10.858 00:17:10.858 ' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.858 --rc genhtml_branch_coverage=1 00:17:10.858 --rc genhtml_function_coverage=1 00:17:10.858 --rc genhtml_legend=1 00:17:10.858 --rc geninfo_all_blocks=1 00:17:10.858 --rc geninfo_unexecuted_blocks=1 00:17:10.858 00:17:10.858 ' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.858 12:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.858 12:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.007 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.007 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:19.007 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:19.007 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:19.007 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:19.007 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:19.007 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:19.007 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:19.008 
12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:19.008 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.008 12:51:58 
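
The arrays assembled above are a whitelist of NICs the test can run on, keyed by PCI vendor:device pairs (0x8086 = Intel, 0x15b3 = Mellanox). Condensed from the trace, which matches two Intel E810 ports in this run and, just below, finds their net devices:

    # e810: 0x1592 0x159b        x722: 0x37d2
    # mlx:  0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013
    # matched here: 0000:31:00.0 and 0000:31:00.1 (0x8086:0x159b, driver ice),
    # exposed as net devices cvl_0_0 and cvl_0_1
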
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:19.008 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:19.008 Found net devices under 0000:31:00.0: cvl_0_0 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:19.008 Found net devices under 0000:31:00.1: cvl_0_1 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:19.008 12:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:19.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:17:19.008 00:17:19.008 --- 10.0.0.2 ping statistics --- 00:17:19.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.008 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:17:19.008 00:17:19.008 --- 10.0.0.1 ping statistics --- 00:17:19.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.008 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.008 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=603513 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 603513 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 603513 ']' 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
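
The two pings above verify both directions of the namespace link before any NVMe traffic. From here the test is driven by two SPDK processes, one per side (commands as logged above and just below; PIDs 603513 and 603811 in this run):

    # target: nvmf_tgt inside the namespace, RPC socket /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
    # host/initiator side: spdk_tgt in the default namespace, RPC socket /var/tmp/host.sock
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
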
00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.009 12:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=603811 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=349267f358e03c80e597ac8fb65fd0e16f2426dca10e0f7c 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ivV 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 349267f358e03c80e597ac8fb65fd0e16f2426dca10e0f7c 0 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 349267f358e03c80e597ac8fb65fd0e16f2426dca10e0f7c 0 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=349267f358e03c80e597ac8fb65fd0e16f2426dca10e0f7c 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
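
gen_dhchap_key, traced above for the first key ("null" digest, 48 hex chars), is easy to reconstruct from its xtrace: half as many random bytes as requested hex characters, hex-dumped, then wrapped by a short python helper. A sketch of the visible steps; the python body itself is not shown in this log, so the DHHC-1 wrapping comment is an inference from the files it produces and the digests map above (null=0, sha256=1, sha384=2, sha512=3):

    len=48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 24 random bytes -> 48 hex chars
    file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.ivV
    # python step: write "DHHC-1:<digest-code>:<base64(key + checksum)>:" into $file
    chmod 0600 "$file"                               # secrets must not be world-readable
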
00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ivV 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ivV 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.ivV 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=402438489c5d5aafac11cf7348f119d5f6a37ced600aa2c79aa0105b44f3d225 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.url 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 402438489c5d5aafac11cf7348f119d5f6a37ced600aa2c79aa0105b44f3d225 3 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 402438489c5d5aafac11cf7348f119d5f6a37ced600aa2c79aa0105b44f3d225 3 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=402438489c5d5aafac11cf7348f119d5f6a37ced600aa2c79aa0105b44f3d225 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.url 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.url 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.url 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e1aeeb3180248abc20de3c939c5e64a4 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fxb 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e1aeeb3180248abc20de3c939c5e64a4 1 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e1aeeb3180248abc20de3c939c5e64a4 1 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e1aeeb3180248abc20de3c939c5e64a4 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:19.950 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fxb 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fxb 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.fxb 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eebd11d3d4932bc74d87d31b1627589629f8454d61330a98 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.f8J 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eebd11d3d4932bc74d87d31b1627589629f8454d61330a98 2 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eebd11d3d4932bc74d87d31b1627589629f8454d61330a98 2 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.212 12:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eebd11d3d4932bc74d87d31b1627589629f8454d61330a98 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.f8J 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.f8J 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.f8J 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=18f8693e4b1e221d175c02b246d0389bcbf00ed5336ff0e0 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.KGC 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 18f8693e4b1e221d175c02b246d0389bcbf00ed5336ff0e0 2 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 18f8693e4b1e221d175c02b246d0389bcbf00ed5336ff0e0 2 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=18f8693e4b1e221d175c02b246d0389bcbf00ed5336ff0e0 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:20.212 12:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.KGC 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.KGC 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.KGC 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ab74fe88489c60254b3d71128a53648f 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oAi 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ab74fe88489c60254b3d71128a53648f 1 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ab74fe88489c60254b3d71128a53648f 1 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ab74fe88489c60254b3d71128a53648f 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:20.212 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oAi 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oAi 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.oAi 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dfead695e89c5ca8091076385aa69bed3f249217834cf03a9f2a664ff7ac2921 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.XGG 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key dfead695e89c5ca8091076385aa69bed3f249217834cf03a9f2a664ff7ac2921 3 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dfead695e89c5ca8091076385aa69bed3f249217834cf03a9f2a664ff7ac2921 3 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dfead695e89c5ca8091076385aa69bed3f249217834cf03a9f2a664ff7ac2921 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:20.213 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:20.474 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.XGG 00:17:20.474 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.XGG 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.XGG 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 603513 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 603513 ']' 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 603811 /var/tmp/host.sock 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 603811 ']' 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:20.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
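
The key material is now complete: four subsystem keys, the first three paired with a controller (bidirectional) key and the last deliberately left without one. As generated in this run:

    # keys[0]=/tmp/spdk.key-null.ivV     ckeys[0]=/tmp/spdk.key-sha512.url
    # keys[1]=/tmp/spdk.key-sha256.fxb   ckeys[1]=/tmp/spdk.key-sha384.f8J
    # keys[2]=/tmp/spdk.key-sha384.KGC   ckeys[2]=/tmp/spdk.key-sha256.oAi
    # keys[3]=/tmp/spdk.key-sha512.XGG   ckeys[3]=   (unidirectional auth only)
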
00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.475 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ivV 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ivV 00:17:20.736 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ivV 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.url ]] 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.url 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.url 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.url 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fxb 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.996 12:52:00 
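
The target/auth.sh@108-@113 lines around here are the registration loop: every key is added to both keyrings, the target's (rpc_cmd, /var/tmp/spdk.sock) and the host's (hostrpc, /var/tmp/host.sock), with ckeys registered only when present. Reconstructed from the trace as a sketch, not the verbatim script:

    for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"     # target-side keyring
      hostrpc keyring_file_add_key "key$i" "${keys[$i]}"     # host-side keyring
      if [[ -n ${ckeys[$i]} ]]; then                         # skipped for ckey3, which is empty
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
      fi
    done
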
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fxb 00:17:20.996 12:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fxb 00:17:21.256 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.f8J ]] 00:17:21.256 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.f8J 00:17:21.256 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.256 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.256 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.256 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.f8J 00:17:21.256 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.f8J 00:17:21.517 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:21.517 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KGC 00:17:21.517 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.517 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.517 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.517 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KGC 00:17:21.517 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KGC 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.oAi ]] 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oAi 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oAi 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oAi 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:21.777 12:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XGG 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.XGG 00:17:21.777 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.XGG 00:17:22.038 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:22.038 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:22.038 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.038 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.038 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.038 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.299 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.300 12:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.300 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.300 
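
Each (digest, dhgroup, keyid) combination that follows goes through the same three RPCs, condensed here from the sha256/null/key0 pass ($subnqn and $hostnqn are the auth.sh@15/@16 values; rpc_cmd talks to the target, hostrpc to the host socket):

    # host: restrict the negotiable auth parameters for this pass
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # target: allow the host NQN on cnode0 with this key pair
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host: attach, forcing DH-HMAC-CHAP with the matching keyring entries
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
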
12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.561 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.561 { 00:17:22.561 "cntlid": 1, 00:17:22.561 "qid": 0, 00:17:22.561 "state": "enabled", 00:17:22.561 "thread": "nvmf_tgt_poll_group_000", 00:17:22.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:22.561 "listen_address": { 00:17:22.561 "trtype": "TCP", 00:17:22.561 "adrfam": "IPv4", 00:17:22.561 "traddr": "10.0.0.2", 00:17:22.561 "trsvcid": "4420" 00:17:22.561 }, 00:17:22.561 "peer_address": { 00:17:22.561 "trtype": "TCP", 00:17:22.561 "adrfam": "IPv4", 00:17:22.561 "traddr": "10.0.0.1", 00:17:22.561 "trsvcid": "56030" 00:17:22.561 }, 00:17:22.561 "auth": { 00:17:22.561 "state": "completed", 00:17:22.561 "digest": "sha256", 00:17:22.561 "dhgroup": "null" 00:17:22.561 } 00:17:22.561 } 00:17:22.561 ]' 00:17:22.561 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.822 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.822 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.822 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.822 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.822 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.822 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.822 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.083 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:23.083 12:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:23.655 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.655 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.655 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.655 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.655 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.655 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.655 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:23.656 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.916 12:52:03 
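
The DHHC-1 secrets handed to nvme connect above are the same hex keys generated earlier, re-encoded as "DHHC-1:<t>:<base64>:" where <t> is the digest code (00=null ... 03=sha512) and the base64 payload is the ASCII hex string plus a 4-byte checksum (CRC-32 per the NVMe DH-HMAC-CHAP secret format; stated as background, not shown in this log). A quick check with the transport secret from this run:

    $ echo 'MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==' \
        | base64 -d | head -c 48; echo
    349267f358e03c80e597ac8fb65fd0e16f2426dca10e0f7c   # == the key behind /tmp/spdk.key-null.ivV
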
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.916 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.177 00:17:24.177 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.177 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.177 12:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.439 { 00:17:24.439 "cntlid": 3, 00:17:24.439 "qid": 0, 00:17:24.439 "state": "enabled", 00:17:24.439 "thread": "nvmf_tgt_poll_group_000", 00:17:24.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:24.439 "listen_address": { 00:17:24.439 "trtype": "TCP", 00:17:24.439 "adrfam": "IPv4", 00:17:24.439 "traddr": "10.0.0.2", 00:17:24.439 "trsvcid": "4420" 00:17:24.439 }, 00:17:24.439 "peer_address": { 00:17:24.439 "trtype": "TCP", 00:17:24.439 "adrfam": "IPv4", 00:17:24.439 "traddr": "10.0.0.1", 00:17:24.439 "trsvcid": "56058" 00:17:24.439 }, 00:17:24.439 "auth": { 00:17:24.439 "state": "completed", 00:17:24.439 "digest": "sha256", 00:17:24.439 "dhgroup": "null" 00:17:24.439 } 00:17:24.439 } 00:17:24.439 ]' 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.439 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.700 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:24.700 12:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:25.642 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.643 12:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.643 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.903 00:17:25.903 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.903 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.903 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.163 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.163 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.163 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.163 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.163 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.164 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.164 { 00:17:26.164 "cntlid": 5, 00:17:26.164 "qid": 0, 00:17:26.164 "state": "enabled", 00:17:26.164 "thread": "nvmf_tgt_poll_group_000", 00:17:26.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:26.164 "listen_address": { 00:17:26.164 "trtype": "TCP", 00:17:26.164 "adrfam": "IPv4", 00:17:26.164 "traddr": "10.0.0.2", 00:17:26.164 "trsvcid": "4420" 00:17:26.164 }, 00:17:26.164 "peer_address": { 00:17:26.164 "trtype": "TCP", 00:17:26.164 "adrfam": "IPv4", 00:17:26.164 "traddr": "10.0.0.1", 00:17:26.164 "trsvcid": "56082" 00:17:26.164 }, 00:17:26.164 "auth": { 00:17:26.164 "state": "completed", 00:17:26.164 "digest": "sha256", 00:17:26.164 "dhgroup": "null" 00:17:26.164 } 00:17:26.164 } 00:17:26.164 ]' 00:17:26.164 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.164 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.164 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.164 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.164 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.164 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.164 12:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.164 12:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.425 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:26.425 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:27.367 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.367 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.367 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.367 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.367 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.367 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.367 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:27.367 12:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.367 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.627 00:17:27.627 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.627 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.627 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.627 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.888 { 00:17:27.888 "cntlid": 7, 00:17:27.888 "qid": 0, 00:17:27.888 "state": "enabled", 00:17:27.888 "thread": "nvmf_tgt_poll_group_000", 00:17:27.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:27.888 "listen_address": { 00:17:27.888 "trtype": "TCP", 00:17:27.888 "adrfam": "IPv4", 00:17:27.888 "traddr": "10.0.0.2", 00:17:27.888 "trsvcid": "4420" 00:17:27.888 }, 00:17:27.888 "peer_address": { 00:17:27.888 "trtype": "TCP", 00:17:27.888 "adrfam": "IPv4", 00:17:27.888 "traddr": "10.0.0.1", 00:17:27.888 "trsvcid": "56098" 00:17:27.888 }, 00:17:27.888 "auth": { 00:17:27.888 "state": "completed", 00:17:27.888 "digest": "sha256", 00:17:27.888 "dhgroup": "null" 00:17:27.888 } 00:17:27.888 } 00:17:27.888 ]' 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.888 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.186 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:28.186 12:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.806 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.066 12:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.327 00:17:29.327 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.327 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.327 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.586 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.587 { 00:17:29.587 "cntlid": 9, 00:17:29.587 "qid": 0, 00:17:29.587 "state": "enabled", 00:17:29.587 "thread": "nvmf_tgt_poll_group_000", 00:17:29.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:29.587 "listen_address": { 00:17:29.587 "trtype": "TCP", 00:17:29.587 "adrfam": "IPv4", 00:17:29.587 "traddr": "10.0.0.2", 00:17:29.587 "trsvcid": "4420" 00:17:29.587 }, 00:17:29.587 "peer_address": { 00:17:29.587 "trtype": "TCP", 00:17:29.587 "adrfam": "IPv4", 00:17:29.587 "traddr": "10.0.0.1", 00:17:29.587 "trsvcid": "56114" 00:17:29.587 }, 00:17:29.587 "auth": { 00:17:29.587 "state": "completed", 00:17:29.587 "digest": "sha256", 00:17:29.587 "dhgroup": "ffdhe2048" 00:17:29.587 } 00:17:29.587 } 00:17:29.587 ]' 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.587 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.847 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:29.847 12:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.788 12:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.788 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.049 00:17:31.049 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.049 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.049 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.311 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.311 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.311 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.311 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.311 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.311 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.311 { 00:17:31.311 "cntlid": 11, 00:17:31.311 "qid": 0, 00:17:31.311 "state": "enabled", 00:17:31.311 "thread": "nvmf_tgt_poll_group_000", 00:17:31.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:31.311 "listen_address": { 00:17:31.311 "trtype": "TCP", 00:17:31.311 "adrfam": "IPv4", 00:17:31.311 "traddr": "10.0.0.2", 00:17:31.311 "trsvcid": "4420" 00:17:31.311 }, 00:17:31.311 "peer_address": { 00:17:31.311 "trtype": "TCP", 00:17:31.311 "adrfam": "IPv4", 00:17:31.311 "traddr": "10.0.0.1", 00:17:31.311 "trsvcid": "56154" 00:17:31.311 }, 00:17:31.311 "auth": { 00:17:31.311 "state": "completed", 00:17:31.311 "digest": "sha256", 00:17:31.311 "dhgroup": "ffdhe2048" 00:17:31.311 } 00:17:31.311 } 00:17:31.311 ]' 00:17:31.311 12:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.311 12:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.311 12:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.311 12:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.311 12:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.311 12:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.311 12:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.311 12:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.572 12:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:31.572 12:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:32.143 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.403 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:32.403 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.403 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.403 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.403 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.403 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:32.403 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.664 12:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.664 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.925 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.925 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.925 { 00:17:32.925 "cntlid": 13, 00:17:32.925 "qid": 0, 00:17:32.925 "state": "enabled", 00:17:32.925 "thread": "nvmf_tgt_poll_group_000", 00:17:32.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:32.925 "listen_address": { 00:17:32.925 "trtype": "TCP", 00:17:32.925 "adrfam": "IPv4", 00:17:32.925 "traddr": "10.0.0.2", 00:17:32.925 "trsvcid": "4420" 00:17:32.925 }, 00:17:32.925 "peer_address": { 00:17:32.925 "trtype": "TCP", 00:17:32.925 "adrfam": "IPv4", 00:17:32.925 "traddr": "10.0.0.1", 00:17:32.925 "trsvcid": "47990" 00:17:32.925 }, 00:17:32.925 "auth": { 00:17:32.925 "state": "completed", 00:17:32.925 "digest": 
"sha256", 00:17:32.925 "dhgroup": "ffdhe2048" 00:17:32.925 } 00:17:32.925 } 00:17:32.925 ]' 00:17:32.926 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.187 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.187 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.187 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.187 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.187 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.187 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.187 12:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.448 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:33.448 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:34.019 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.019 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.019 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.019 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.019 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.019 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.019 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.019 12:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.279 12:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.279 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.540 00:17:34.540 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.540 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.540 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.801 { 00:17:34.801 "cntlid": 15, 00:17:34.801 "qid": 0, 00:17:34.801 "state": "enabled", 00:17:34.801 "thread": "nvmf_tgt_poll_group_000", 00:17:34.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:34.801 "listen_address": { 00:17:34.801 "trtype": "TCP", 00:17:34.801 "adrfam": "IPv4", 00:17:34.801 "traddr": "10.0.0.2", 00:17:34.801 "trsvcid": "4420" 00:17:34.801 }, 00:17:34.801 "peer_address": { 00:17:34.801 "trtype": "TCP", 00:17:34.801 "adrfam": "IPv4", 00:17:34.801 "traddr": "10.0.0.1", 00:17:34.801 
"trsvcid": "48022" 00:17:34.801 }, 00:17:34.801 "auth": { 00:17:34.801 "state": "completed", 00:17:34.801 "digest": "sha256", 00:17:34.801 "dhgroup": "ffdhe2048" 00:17:34.801 } 00:17:34.801 } 00:17:34.801 ]' 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.801 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.061 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:35.061 12:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:36.003 12:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.003 12:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.264 00:17:36.264 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.264 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.264 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.525 { 00:17:36.525 "cntlid": 17, 00:17:36.525 "qid": 0, 00:17:36.525 "state": "enabled", 00:17:36.525 "thread": "nvmf_tgt_poll_group_000", 00:17:36.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:36.525 "listen_address": { 00:17:36.525 "trtype": "TCP", 00:17:36.525 "adrfam": "IPv4", 
00:17:36.525 "traddr": "10.0.0.2", 00:17:36.525 "trsvcid": "4420" 00:17:36.525 }, 00:17:36.525 "peer_address": { 00:17:36.525 "trtype": "TCP", 00:17:36.525 "adrfam": "IPv4", 00:17:36.525 "traddr": "10.0.0.1", 00:17:36.525 "trsvcid": "48036" 00:17:36.525 }, 00:17:36.525 "auth": { 00:17:36.525 "state": "completed", 00:17:36.525 "digest": "sha256", 00:17:36.525 "dhgroup": "ffdhe3072" 00:17:36.525 } 00:17:36.525 } 00:17:36.525 ]' 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.525 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.787 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:36.787 12:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.728 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.991 00:17:37.991 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.991 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.991 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.254 { 
00:17:38.254 "cntlid": 19, 00:17:38.254 "qid": 0, 00:17:38.254 "state": "enabled", 00:17:38.254 "thread": "nvmf_tgt_poll_group_000", 00:17:38.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:38.254 "listen_address": { 00:17:38.254 "trtype": "TCP", 00:17:38.254 "adrfam": "IPv4", 00:17:38.254 "traddr": "10.0.0.2", 00:17:38.254 "trsvcid": "4420" 00:17:38.254 }, 00:17:38.254 "peer_address": { 00:17:38.254 "trtype": "TCP", 00:17:38.254 "adrfam": "IPv4", 00:17:38.254 "traddr": "10.0.0.1", 00:17:38.254 "trsvcid": "48054" 00:17:38.254 }, 00:17:38.254 "auth": { 00:17:38.254 "state": "completed", 00:17:38.254 "digest": "sha256", 00:17:38.254 "dhgroup": "ffdhe3072" 00:17:38.254 } 00:17:38.254 } 00:17:38.254 ]' 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.254 12:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.254 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.254 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.254 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.254 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.254 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.515 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:38.515 12:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:39.456 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.456 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.456 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.456 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.456 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.456 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.456 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.457 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.717 00:17:39.717 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.717 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.717 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.978 12:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.978 { 00:17:39.978 "cntlid": 21, 00:17:39.978 "qid": 0, 00:17:39.978 "state": "enabled", 00:17:39.978 "thread": "nvmf_tgt_poll_group_000", 00:17:39.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:39.978 "listen_address": { 00:17:39.978 "trtype": "TCP", 00:17:39.978 "adrfam": "IPv4", 00:17:39.978 "traddr": "10.0.0.2", 00:17:39.978 "trsvcid": "4420" 00:17:39.978 }, 00:17:39.978 "peer_address": { 00:17:39.978 "trtype": "TCP", 00:17:39.978 "adrfam": "IPv4", 00:17:39.978 "traddr": "10.0.0.1", 00:17:39.978 "trsvcid": "48078" 00:17:39.978 }, 00:17:39.978 "auth": { 00:17:39.978 "state": "completed", 00:17:39.978 "digest": "sha256", 00:17:39.978 "dhgroup": "ffdhe3072" 00:17:39.978 } 00:17:39.978 } 00:17:39.978 ]' 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.978 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.238 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:40.238 12:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:41.179 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.179 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
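Each round then re-authenticates through the kernel initiator as a second path, feeding nvme-cli the same key material as inline DHHC-1 secrets; the shape from the trace (the secrets are shortened here with '...' purely for width):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
      --dhchap-secret 'DHHC-1:02:MThm...' --dhchap-ctrl-secret 'DHHC-1:01:YWI3...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
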
-- # [[ 0 == 0 ]] 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.180 12:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.441 00:17:41.441 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.441 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.441 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.703 12:52:21 
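Note the asymmetry in the key3 rounds above: auth.sh@68's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to an empty array when no controller key is configured for that index, so nvmf_subsystem_add_host receives --dhchap-key key3 only and authentication is unidirectional (the host proves itself, the controller does not). For keys 0-2 the same expansion appends the extra flag and the exchange is bidirectional. The two shapes, side by side:

  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2  # bidirectional
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3                           # unidirectional
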
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.703 { 00:17:41.703 "cntlid": 23, 00:17:41.703 "qid": 0, 00:17:41.703 "state": "enabled", 00:17:41.703 "thread": "nvmf_tgt_poll_group_000", 00:17:41.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:41.703 "listen_address": { 00:17:41.703 "trtype": "TCP", 00:17:41.703 "adrfam": "IPv4", 00:17:41.703 "traddr": "10.0.0.2", 00:17:41.703 "trsvcid": "4420" 00:17:41.703 }, 00:17:41.703 "peer_address": { 00:17:41.703 "trtype": "TCP", 00:17:41.703 "adrfam": "IPv4", 00:17:41.703 "traddr": "10.0.0.1", 00:17:41.703 "trsvcid": "48106" 00:17:41.703 }, 00:17:41.703 "auth": { 00:17:41.703 "state": "completed", 00:17:41.703 "digest": "sha256", 00:17:41.703 "dhgroup": "ffdhe3072" 00:17:41.703 } 00:17:41.703 } 00:17:41.703 ]' 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.703 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.704 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.965 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:41.965 12:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:42.907 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.907 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.908 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.169 00:17:43.169 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.169 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.169 12:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.429 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.429 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
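The auth.sh@119-123 tags show the driver loop rolling over to ffdhe4096. A hypothetical reconstruction of that loop, with array contents inferred only from what this stretch of the log exercises:

  for dhgroup in "${dhgroups[@]}"; do     # ffdhe3072, ffdhe4096, ffdhe6144 seen here
      for keyid in "${!keys[@]}"; do      # indices 0..3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
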
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.429 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.429 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.430 { 00:17:43.430 "cntlid": 25, 00:17:43.430 "qid": 0, 00:17:43.430 "state": "enabled", 00:17:43.430 "thread": "nvmf_tgt_poll_group_000", 00:17:43.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:43.430 "listen_address": { 00:17:43.430 "trtype": "TCP", 00:17:43.430 "adrfam": "IPv4", 00:17:43.430 "traddr": "10.0.0.2", 00:17:43.430 "trsvcid": "4420" 00:17:43.430 }, 00:17:43.430 "peer_address": { 00:17:43.430 "trtype": "TCP", 00:17:43.430 "adrfam": "IPv4", 00:17:43.430 "traddr": "10.0.0.1", 00:17:43.430 "trsvcid": "40054" 00:17:43.430 }, 00:17:43.430 "auth": { 00:17:43.430 "state": "completed", 00:17:43.430 "digest": "sha256", 00:17:43.430 "dhgroup": "ffdhe4096" 00:17:43.430 } 00:17:43.430 } 00:17:43.430 ]' 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.430 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.691 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:43.691 12:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.633 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.895 00:17:44.895 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.895 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.895 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
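The pinning matters here: bdev_nvme_set_options restricts the host to exactly one digest/dhgroup pair before each reconnect, so a qpair reaching state "completed" can only mean that specific combination negotiated. Expanded form of the hostrpc call (path shortened):

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
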
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.157 { 00:17:45.157 "cntlid": 27, 00:17:45.157 "qid": 0, 00:17:45.157 "state": "enabled", 00:17:45.157 "thread": "nvmf_tgt_poll_group_000", 00:17:45.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:45.157 "listen_address": { 00:17:45.157 "trtype": "TCP", 00:17:45.157 "adrfam": "IPv4", 00:17:45.157 "traddr": "10.0.0.2", 00:17:45.157 "trsvcid": "4420" 00:17:45.157 }, 00:17:45.157 "peer_address": { 00:17:45.157 "trtype": "TCP", 00:17:45.157 "adrfam": "IPv4", 00:17:45.157 "traddr": "10.0.0.1", 00:17:45.157 "trsvcid": "40080" 00:17:45.157 }, 00:17:45.157 "auth": { 00:17:45.157 "state": "completed", 00:17:45.157 "digest": "sha256", 00:17:45.157 "dhgroup": "ffdhe4096" 00:17:45.157 } 00:17:45.157 } 00:17:45.157 ]' 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.157 12:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.157 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.157 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.157 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.418 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:45.418 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:46.202 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:46.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.202 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:46.202 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.202 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.202 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.202 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.202 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:46.202 12:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:46.462 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:46.462 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.462 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.462 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:46.462 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.463 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.463 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.463 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.463 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.463 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.463 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.463 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.463 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.723 00:17:46.723 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
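Two RPC endpoints are in play throughout: bare rpc_cmd drives the nvmf target (presumably on SPDK's default /var/tmp/spdk.sock), while every hostrpc line expands, per target/auth.sh@31, into a call against the second SPDK app acting as the NVMe-oF host. Roughly:

  hostrpc() {
      # $rootdir is an assumed variable name; the trace shows the absolute rpc.py path
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
  }
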
00:17:46.723 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.723 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.723 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.723 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.723 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.723 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.984 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.984 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.984 { 00:17:46.984 "cntlid": 29, 00:17:46.984 "qid": 0, 00:17:46.984 "state": "enabled", 00:17:46.984 "thread": "nvmf_tgt_poll_group_000", 00:17:46.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:46.984 "listen_address": { 00:17:46.984 "trtype": "TCP", 00:17:46.984 "adrfam": "IPv4", 00:17:46.984 "traddr": "10.0.0.2", 00:17:46.984 "trsvcid": "4420" 00:17:46.984 }, 00:17:46.984 "peer_address": { 00:17:46.984 "trtype": "TCP", 00:17:46.984 "adrfam": "IPv4", 00:17:46.984 "traddr": "10.0.0.1", 00:17:46.984 "trsvcid": "40116" 00:17:46.984 }, 00:17:46.984 "auth": { 00:17:46.984 "state": "completed", 00:17:46.984 "digest": "sha256", 00:17:46.984 "dhgroup": "ffdhe4096" 00:17:46.984 } 00:17:46.984 } 00:17:46.984 ]' 00:17:46.984 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.984 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.984 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.985 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:46.985 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.985 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.985 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.985 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.245 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:47.245 12:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: 
--dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:47.816 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.077 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.077 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.077 12:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.338 00:17:48.338 12:52:28 
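On the secret strings themselves: per the NVMe DH-HMAC-CHAP secret representation (recalled from TP 8006 / nvme-cli conventions, treat this mapping as an assumption since the log does not state it), the two digits after "DHHC-1:" name the hash used to transform the base secret (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 field carries the key material plus a CRC-32 check, with a mandatory trailing colon. In this run the transform id happens to track the key index, e.g. key3's host secret seen above:

  DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=:
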
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.338 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.338 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.599 { 00:17:48.599 "cntlid": 31, 00:17:48.599 "qid": 0, 00:17:48.599 "state": "enabled", 00:17:48.599 "thread": "nvmf_tgt_poll_group_000", 00:17:48.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:48.599 "listen_address": { 00:17:48.599 "trtype": "TCP", 00:17:48.599 "adrfam": "IPv4", 00:17:48.599 "traddr": "10.0.0.2", 00:17:48.599 "trsvcid": "4420" 00:17:48.599 }, 00:17:48.599 "peer_address": { 00:17:48.599 "trtype": "TCP", 00:17:48.599 "adrfam": "IPv4", 00:17:48.599 "traddr": "10.0.0.1", 00:17:48.599 "trsvcid": "40130" 00:17:48.599 }, 00:17:48.599 "auth": { 00:17:48.599 "state": "completed", 00:17:48.599 "digest": "sha256", 00:17:48.599 "dhgroup": "ffdhe4096" 00:17:48.599 } 00:17:48.599 } 00:17:48.599 ]' 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.599 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.860 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.860 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.860 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.860 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:48.860 12:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.803 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.064 00:17:50.333 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.333 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.333 12:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.333 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.333 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.333 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.333 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.333 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.333 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.333 { 00:17:50.333 "cntlid": 33, 00:17:50.333 "qid": 0, 00:17:50.333 "state": "enabled", 00:17:50.333 "thread": "nvmf_tgt_poll_group_000", 00:17:50.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:50.333 "listen_address": { 00:17:50.333 "trtype": "TCP", 00:17:50.333 "adrfam": "IPv4", 00:17:50.333 "traddr": "10.0.0.2", 00:17:50.333 "trsvcid": "4420" 00:17:50.333 }, 00:17:50.333 "peer_address": { 00:17:50.333 "trtype": "TCP", 00:17:50.333 "adrfam": "IPv4", 00:17:50.333 "traddr": "10.0.0.1", 00:17:50.333 "trsvcid": "40164" 00:17:50.333 }, 00:17:50.333 "auth": { 00:17:50.333 "state": "completed", 00:17:50.333 "digest": "sha256", 00:17:50.333 "dhgroup": "ffdhe6144" 00:17:50.333 } 00:17:50.333 } 00:17:50.333 ]' 00:17:50.334 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.334 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.334 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.594 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.594 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.594 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.594 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.594 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.594 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:50.594 12:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.535 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.106 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.106 { 00:17:52.106 "cntlid": 35, 00:17:52.106 "qid": 0, 00:17:52.106 "state": "enabled", 00:17:52.106 "thread": "nvmf_tgt_poll_group_000", 00:17:52.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:52.106 "listen_address": { 00:17:52.106 "trtype": "TCP", 00:17:52.106 "adrfam": "IPv4", 00:17:52.106 "traddr": "10.0.0.2", 00:17:52.106 "trsvcid": "4420" 00:17:52.106 }, 00:17:52.106 "peer_address": { 00:17:52.106 "trtype": "TCP", 00:17:52.106 "adrfam": "IPv4", 00:17:52.106 "traddr": "10.0.0.1", 00:17:52.106 "trsvcid": "40190" 00:17:52.106 }, 00:17:52.106 "auth": { 00:17:52.106 "state": "completed", 00:17:52.106 "digest": "sha256", 00:17:52.106 "dhgroup": "ffdhe6144" 00:17:52.106 } 00:17:52.106 } 00:17:52.106 ]' 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.106 12:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.367 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.367 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.367 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.367 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.367 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.367 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:52.367 12:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:17:53.307 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.307 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.307 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.307 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.307 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.307 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.307 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.307 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.569 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.831 00:17:53.831 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.831 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.831 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.092 { 00:17:54.092 "cntlid": 37, 00:17:54.092 "qid": 0, 00:17:54.092 "state": "enabled", 00:17:54.092 "thread": "nvmf_tgt_poll_group_000", 00:17:54.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:54.092 "listen_address": { 00:17:54.092 "trtype": "TCP", 00:17:54.092 "adrfam": "IPv4", 00:17:54.092 "traddr": "10.0.0.2", 00:17:54.092 "trsvcid": "4420" 00:17:54.092 }, 00:17:54.092 "peer_address": { 00:17:54.092 "trtype": "TCP", 00:17:54.092 "adrfam": "IPv4", 00:17:54.092 "traddr": "10.0.0.1", 00:17:54.092 "trsvcid": "58658" 00:17:54.092 }, 00:17:54.092 "auth": { 00:17:54.092 "state": "completed", 00:17:54.092 "digest": "sha256", 00:17:54.092 "dhgroup": "ffdhe6144" 00:17:54.092 } 00:17:54.092 } 00:17:54.092 ]' 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:54.092 12:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.353 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:54.353 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:17:55.296 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.296 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.296 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.296 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.296 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.296 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.296 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:55.296 12:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.296 12:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.296 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.558 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.819 { 00:17:55.819 "cntlid": 39, 00:17:55.819 "qid": 0, 00:17:55.819 "state": "enabled", 00:17:55.819 "thread": "nvmf_tgt_poll_group_000", 00:17:55.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:55.819 "listen_address": { 00:17:55.819 "trtype": "TCP", 00:17:55.819 "adrfam": "IPv4", 00:17:55.819 "traddr": "10.0.0.2", 00:17:55.819 "trsvcid": "4420" 00:17:55.819 }, 00:17:55.819 "peer_address": { 00:17:55.819 "trtype": "TCP", 00:17:55.819 "adrfam": "IPv4", 00:17:55.819 "traddr": "10.0.0.1", 00:17:55.819 "trsvcid": "58694" 00:17:55.819 }, 00:17:55.819 "auth": { 00:17:55.819 "state": "completed", 00:17:55.819 "digest": "sha256", 00:17:55.819 "dhgroup": "ffdhe6144" 00:17:55.819 } 00:17:55.819 } 00:17:55.819 ]' 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.819 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.080 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.080 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.080 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:56.080 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.080 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.080 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:56.080 12:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.022 12:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.595 00:17:57.595 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.595 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.595 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.856 { 00:17:57.856 "cntlid": 41, 00:17:57.856 "qid": 0, 00:17:57.856 "state": "enabled", 00:17:57.856 "thread": "nvmf_tgt_poll_group_000", 00:17:57.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:57.856 "listen_address": { 00:17:57.856 "trtype": "TCP", 00:17:57.856 "adrfam": "IPv4", 00:17:57.856 "traddr": "10.0.0.2", 00:17:57.856 "trsvcid": "4420" 00:17:57.856 }, 00:17:57.856 "peer_address": { 00:17:57.856 "trtype": "TCP", 00:17:57.856 "adrfam": "IPv4", 00:17:57.856 "traddr": "10.0.0.1", 00:17:57.856 "trsvcid": "58720" 00:17:57.856 }, 00:17:57.856 "auth": { 00:17:57.856 "state": "completed", 00:17:57.856 "digest": "sha256", 00:17:57.856 "dhgroup": "ffdhe8192" 00:17:57.856 } 00:17:57.856 } 00:17:57.856 ]' 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.856 12:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.856 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.117 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:58.117 12:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.059 12:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.631 00:17:59.631 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.631 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.631 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.893 { 00:17:59.893 "cntlid": 43, 00:17:59.893 "qid": 0, 00:17:59.893 "state": "enabled", 00:17:59.893 "thread": "nvmf_tgt_poll_group_000", 00:17:59.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:59.893 "listen_address": { 00:17:59.893 "trtype": "TCP", 00:17:59.893 "adrfam": "IPv4", 00:17:59.893 "traddr": "10.0.0.2", 00:17:59.893 "trsvcid": "4420" 00:17:59.893 }, 00:17:59.893 "peer_address": { 00:17:59.893 "trtype": "TCP", 00:17:59.893 "adrfam": "IPv4", 00:17:59.893 "traddr": "10.0.0.1", 00:17:59.893 "trsvcid": "58742" 00:17:59.893 }, 00:17:59.893 "auth": { 00:17:59.893 "state": "completed", 00:17:59.893 "digest": "sha256", 00:17:59.893 "dhgroup": "ffdhe8192" 00:17:59.893 } 00:17:59.893 } 00:17:59.893 ]' 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.893 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.154 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:00.154 12:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:01.097 12:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.097 12:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.670 00:18:01.670 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.670 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.670 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.931 { 00:18:01.931 "cntlid": 45, 00:18:01.931 "qid": 0, 00:18:01.931 "state": "enabled", 00:18:01.931 "thread": "nvmf_tgt_poll_group_000", 00:18:01.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:01.931 "listen_address": { 00:18:01.931 "trtype": "TCP", 00:18:01.931 "adrfam": "IPv4", 00:18:01.931 "traddr": "10.0.0.2", 00:18:01.931 "trsvcid": "4420" 00:18:01.931 }, 00:18:01.931 "peer_address": { 00:18:01.931 "trtype": "TCP", 00:18:01.931 "adrfam": "IPv4", 00:18:01.931 "traddr": "10.0.0.1", 00:18:01.931 "trsvcid": "58784" 00:18:01.931 }, 00:18:01.931 "auth": { 00:18:01.931 "state": "completed", 00:18:01.931 "digest": "sha256", 00:18:01.931 "dhgroup": "ffdhe8192" 00:18:01.931 } 00:18:01.931 } 00:18:01.931 ]' 00:18:01.931 
12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.931 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.193 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:02.193 12:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.135 12:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.135 12:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.707 00:18:03.707 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.707 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.707 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.968 { 00:18:03.968 "cntlid": 47, 00:18:03.968 "qid": 0, 00:18:03.968 "state": "enabled", 00:18:03.968 "thread": "nvmf_tgt_poll_group_000", 00:18:03.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:03.968 "listen_address": { 00:18:03.968 "trtype": "TCP", 00:18:03.968 "adrfam": "IPv4", 00:18:03.968 "traddr": "10.0.0.2", 00:18:03.968 "trsvcid": "4420" 00:18:03.968 }, 00:18:03.968 "peer_address": { 00:18:03.968 "trtype": "TCP", 00:18:03.968 "adrfam": "IPv4", 00:18:03.968 "traddr": "10.0.0.1", 00:18:03.968 "trsvcid": "52498" 00:18:03.968 }, 00:18:03.968 "auth": { 00:18:03.968 "state": "completed", 00:18:03.968 
"digest": "sha256", 00:18:03.968 "dhgroup": "ffdhe8192" 00:18:03.968 } 00:18:03.968 } 00:18:03.968 ]' 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.968 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.228 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:04.229 12:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:04.799 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:05.059 12:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.059 12:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.319 00:18:05.319 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.319 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.319 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.579 { 00:18:05.579 "cntlid": 49, 00:18:05.579 "qid": 0, 00:18:05.579 "state": "enabled", 00:18:05.579 "thread": "nvmf_tgt_poll_group_000", 00:18:05.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:05.579 "listen_address": { 00:18:05.579 "trtype": "TCP", 00:18:05.579 "adrfam": "IPv4", 
00:18:05.579 "traddr": "10.0.0.2", 00:18:05.579 "trsvcid": "4420" 00:18:05.579 }, 00:18:05.579 "peer_address": { 00:18:05.579 "trtype": "TCP", 00:18:05.579 "adrfam": "IPv4", 00:18:05.579 "traddr": "10.0.0.1", 00:18:05.579 "trsvcid": "52520" 00:18:05.579 }, 00:18:05.579 "auth": { 00:18:05.579 "state": "completed", 00:18:05.579 "digest": "sha384", 00:18:05.579 "dhgroup": "null" 00:18:05.579 } 00:18:05.579 } 00:18:05.579 ]' 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.579 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.840 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:05.840 12:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.779 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.039 00:18:07.039 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.039 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.039 12:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.300 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.300 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.300 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.300 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.301 { 00:18:07.301 "cntlid": 51, 00:18:07.301 "qid": 0, 00:18:07.301 "state": "enabled", 
00:18:07.301 "thread": "nvmf_tgt_poll_group_000", 00:18:07.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:07.301 "listen_address": { 00:18:07.301 "trtype": "TCP", 00:18:07.301 "adrfam": "IPv4", 00:18:07.301 "traddr": "10.0.0.2", 00:18:07.301 "trsvcid": "4420" 00:18:07.301 }, 00:18:07.301 "peer_address": { 00:18:07.301 "trtype": "TCP", 00:18:07.301 "adrfam": "IPv4", 00:18:07.301 "traddr": "10.0.0.1", 00:18:07.301 "trsvcid": "52530" 00:18:07.301 }, 00:18:07.301 "auth": { 00:18:07.301 "state": "completed", 00:18:07.301 "digest": "sha384", 00:18:07.301 "dhgroup": "null" 00:18:07.301 } 00:18:07.301 } 00:18:07.301 ]' 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.301 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.563 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:07.563 12:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.502 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.762 00:18:08.762 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.762 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.762 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.022 12:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.022 { 00:18:09.022 "cntlid": 53, 00:18:09.022 "qid": 0, 00:18:09.022 "state": "enabled", 00:18:09.022 "thread": "nvmf_tgt_poll_group_000", 00:18:09.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:09.022 "listen_address": { 00:18:09.022 "trtype": "TCP", 00:18:09.022 "adrfam": "IPv4", 00:18:09.022 "traddr": "10.0.0.2", 00:18:09.022 "trsvcid": "4420" 00:18:09.022 }, 00:18:09.022 "peer_address": { 00:18:09.022 "trtype": "TCP", 00:18:09.022 "adrfam": "IPv4", 00:18:09.022 "traddr": "10.0.0.1", 00:18:09.022 "trsvcid": "52558" 00:18:09.022 }, 00:18:09.022 "auth": { 00:18:09.022 "state": "completed", 00:18:09.022 "digest": "sha384", 00:18:09.022 "dhgroup": "null" 00:18:09.022 } 00:18:09.022 } 00:18:09.022 ]' 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.022 12:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.284 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:09.284 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.225 12:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.484 00:18:10.484 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.484 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.484 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.745 { 00:18:10.745 "cntlid": 55, 00:18:10.745 "qid": 0, 00:18:10.745 "state": "enabled", 00:18:10.745 "thread": "nvmf_tgt_poll_group_000", 00:18:10.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:10.745 "listen_address": { 00:18:10.745 "trtype": "TCP", 00:18:10.745 "adrfam": "IPv4", 00:18:10.745 "traddr": "10.0.0.2", 00:18:10.745 "trsvcid": "4420" 00:18:10.745 }, 00:18:10.745 "peer_address": { 00:18:10.745 "trtype": "TCP", 00:18:10.745 "adrfam": "IPv4", 00:18:10.745 "traddr": "10.0.0.1", 00:18:10.745 "trsvcid": "52600" 00:18:10.745 }, 00:18:10.745 "auth": { 00:18:10.745 "state": "completed", 00:18:10.745 "digest": "sha384", 00:18:10.745 "dhgroup": "null" 00:18:10.745 } 00:18:10.745 } 00:18:10.745 ]' 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.745 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.005 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:11.005 12:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:11.577 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.838 12:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.838 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.098 00:18:12.098 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.098 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.098 12:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.359 { 00:18:12.359 "cntlid": 57, 00:18:12.359 "qid": 0, 00:18:12.359 "state": "enabled", 00:18:12.359 "thread": "nvmf_tgt_poll_group_000", 00:18:12.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:12.359 "listen_address": { 00:18:12.359 "trtype": "TCP", 00:18:12.359 "adrfam": "IPv4", 00:18:12.359 "traddr": "10.0.0.2", 00:18:12.359 "trsvcid": "4420" 00:18:12.359 }, 00:18:12.359 "peer_address": { 00:18:12.359 "trtype": "TCP", 00:18:12.359 "adrfam": "IPv4", 00:18:12.359 "traddr": "10.0.0.1", 00:18:12.359 "trsvcid": "49354" 00:18:12.359 }, 00:18:12.359 "auth": { 00:18:12.359 "state": "completed", 00:18:12.359 "digest": "sha384", 00:18:12.359 "dhgroup": "ffdhe2048" 00:18:12.359 } 00:18:12.359 } 00:18:12.359 ]' 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.359 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.619 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.619 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.619 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.619 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:12.619 12:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.561 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.823 00:18:13.823 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.823 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.823 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.085 { 00:18:14.085 "cntlid": 59, 00:18:14.085 "qid": 0, 00:18:14.085 "state": "enabled", 00:18:14.085 "thread": "nvmf_tgt_poll_group_000", 00:18:14.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:14.085 "listen_address": { 00:18:14.085 "trtype": "TCP", 00:18:14.085 "adrfam": "IPv4", 00:18:14.085 "traddr": "10.0.0.2", 00:18:14.085 "trsvcid": "4420" 00:18:14.085 }, 00:18:14.085 "peer_address": { 00:18:14.085 "trtype": "TCP", 00:18:14.085 "adrfam": "IPv4", 00:18:14.085 "traddr": "10.0.0.1", 00:18:14.085 "trsvcid": "49376" 00:18:14.085 }, 00:18:14.085 "auth": { 00:18:14.085 "state": "completed", 00:18:14.085 "digest": "sha384", 00:18:14.085 "dhgroup": "ffdhe2048" 00:18:14.085 } 00:18:14.085 } 00:18:14.085 ]' 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.085 12:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.346 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:14.346 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:15.286 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.286 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.286 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.286 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.286 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.286 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.286 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.286 12:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.287 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.547 00:18:15.547 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.547 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.548 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.809 { 00:18:15.809 "cntlid": 61, 00:18:15.809 "qid": 0, 00:18:15.809 "state": "enabled", 00:18:15.809 "thread": "nvmf_tgt_poll_group_000", 00:18:15.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:15.809 "listen_address": { 00:18:15.809 "trtype": "TCP", 00:18:15.809 "adrfam": "IPv4", 00:18:15.809 "traddr": "10.0.0.2", 00:18:15.809 "trsvcid": "4420" 00:18:15.809 }, 00:18:15.809 "peer_address": { 00:18:15.809 "trtype": "TCP", 00:18:15.809 "adrfam": "IPv4", 00:18:15.809 "traddr": "10.0.0.1", 00:18:15.809 "trsvcid": "49406" 00:18:15.809 }, 00:18:15.809 "auth": { 00:18:15.809 "state": "completed", 00:18:15.809 "digest": "sha384", 00:18:15.809 "dhgroup": "ffdhe2048" 00:18:15.809 } 00:18:15.809 } 00:18:15.809 ]' 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.809 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.072 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:16.072 12:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.015 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.016 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:17.016 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.016 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.016 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.016 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.016 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.016 12:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.277 00:18:17.277 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.277 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.277 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.537 { 00:18:17.537 "cntlid": 63, 00:18:17.537 "qid": 0, 00:18:17.537 "state": "enabled", 00:18:17.537 "thread": "nvmf_tgt_poll_group_000", 00:18:17.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:17.537 "listen_address": { 00:18:17.537 "trtype": "TCP", 00:18:17.537 "adrfam": "IPv4", 00:18:17.537 "traddr": "10.0.0.2", 00:18:17.537 "trsvcid": "4420" 00:18:17.537 }, 00:18:17.537 "peer_address": { 00:18:17.537 "trtype": "TCP", 00:18:17.537 "adrfam": "IPv4", 00:18:17.537 "traddr": "10.0.0.1", 00:18:17.537 "trsvcid": "49430" 00:18:17.537 }, 00:18:17.537 "auth": { 00:18:17.537 "state": "completed", 00:18:17.537 "digest": "sha384", 00:18:17.537 "dhgroup": "ffdhe2048" 00:18:17.537 } 00:18:17.537 } 00:18:17.537 ]' 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.537 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.798 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:17.798 12:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:18.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.742 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.003 
00:18:19.003 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.003 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.003 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.264 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.264 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.264 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.264 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.264 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.264 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.264 { 00:18:19.264 "cntlid": 65, 00:18:19.264 "qid": 0, 00:18:19.264 "state": "enabled", 00:18:19.264 "thread": "nvmf_tgt_poll_group_000", 00:18:19.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:19.264 "listen_address": { 00:18:19.264 "trtype": "TCP", 00:18:19.264 "adrfam": "IPv4", 00:18:19.264 "traddr": "10.0.0.2", 00:18:19.264 "trsvcid": "4420" 00:18:19.264 }, 00:18:19.264 "peer_address": { 00:18:19.264 "trtype": "TCP", 00:18:19.264 "adrfam": "IPv4", 00:18:19.264 "traddr": "10.0.0.1", 00:18:19.264 "trsvcid": "49470" 00:18:19.264 }, 00:18:19.264 "auth": { 00:18:19.264 "state": "completed", 00:18:19.264 "digest": "sha384", 00:18:19.264 "dhgroup": "ffdhe3072" 00:18:19.264 } 00:18:19.264 } 00:18:19.264 ]' 00:18:19.264 12:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.264 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.264 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.264 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.264 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.264 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.264 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.264 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.524 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:19.524 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:20.095 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.095 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.095 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.095 12:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.356 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.616 00:18:20.616 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.616 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.616 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.877 { 00:18:20.877 "cntlid": 67, 00:18:20.877 "qid": 0, 00:18:20.877 "state": "enabled", 00:18:20.877 "thread": "nvmf_tgt_poll_group_000", 00:18:20.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:20.877 "listen_address": { 00:18:20.877 "trtype": "TCP", 00:18:20.877 "adrfam": "IPv4", 00:18:20.877 "traddr": "10.0.0.2", 00:18:20.877 "trsvcid": "4420" 00:18:20.877 }, 00:18:20.877 "peer_address": { 00:18:20.877 "trtype": "TCP", 00:18:20.877 "adrfam": "IPv4", 00:18:20.877 "traddr": "10.0.0.1", 00:18:20.877 "trsvcid": "49486" 00:18:20.877 }, 00:18:20.877 "auth": { 00:18:20.877 "state": "completed", 00:18:20.877 "digest": "sha384", 00:18:20.877 "dhgroup": "ffdhe3072" 00:18:20.877 } 00:18:20.877 } 00:18:20.877 ]' 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.877 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.136 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.136 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.136 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.136 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret 
DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:21.137 12:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.080 12:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.340 00:18:22.340 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.340 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.340 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.600 { 00:18:22.600 "cntlid": 69, 00:18:22.600 "qid": 0, 00:18:22.600 "state": "enabled", 00:18:22.600 "thread": "nvmf_tgt_poll_group_000", 00:18:22.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:22.600 "listen_address": { 00:18:22.600 "trtype": "TCP", 00:18:22.600 "adrfam": "IPv4", 00:18:22.600 "traddr": "10.0.0.2", 00:18:22.600 "trsvcid": "4420" 00:18:22.600 }, 00:18:22.600 "peer_address": { 00:18:22.600 "trtype": "TCP", 00:18:22.600 "adrfam": "IPv4", 00:18:22.600 "traddr": "10.0.0.1", 00:18:22.600 "trsvcid": "50370" 00:18:22.600 }, 00:18:22.600 "auth": { 00:18:22.600 "state": "completed", 00:18:22.600 "digest": "sha384", 00:18:22.600 "dhgroup": "ffdhe3072" 00:18:22.600 } 00:18:22.600 } 00:18:22.600 ]' 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.600 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.859 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.859 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.859 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:22.859 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:22.859 12:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:18:23.801 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.062 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.062 00:18:24.062 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.062 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.062 12:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.322 { 00:18:24.322 "cntlid": 71, 00:18:24.322 "qid": 0, 00:18:24.322 "state": "enabled", 00:18:24.322 "thread": "nvmf_tgt_poll_group_000", 00:18:24.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:24.322 "listen_address": { 00:18:24.322 "trtype": "TCP", 00:18:24.322 "adrfam": "IPv4", 00:18:24.322 "traddr": "10.0.0.2", 00:18:24.322 "trsvcid": "4420" 00:18:24.322 }, 00:18:24.322 "peer_address": { 00:18:24.322 "trtype": "TCP", 00:18:24.322 "adrfam": "IPv4", 00:18:24.322 "traddr": "10.0.0.1", 00:18:24.322 "trsvcid": "50382" 00:18:24.322 }, 00:18:24.322 "auth": { 00:18:24.322 "state": "completed", 00:18:24.322 "digest": "sha384", 00:18:24.322 "dhgroup": "ffdhe3072" 00:18:24.322 } 00:18:24.322 } 00:18:24.322 ]' 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.322 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.584 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.584 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.584 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.584 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.584 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.584 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:24.584 12:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
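The qpairs JSON and the three jq checks in the trace above are the actual pass/fail criterion of each round: rather than trusting that the connect succeeded, the test asks the target which auth parameters were negotiated. A sketch of that assertion, using the same RPC and jq filters as the trace (ffdhe4096 matches the round being set up here; digest and dhgroup are per-round values):

  # Fetch the subsystem's qpairs from the target and inspect the negotiated
  # auth block of the first (qid 0, admin) qpair.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # The round passes only if the digest, DH group, and final state all match
  # what bdev_nvme_set_options pinned on the host side.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Once the RPC path passes, the trace re-exercises the same key material through the kernel initiator via nvme-cli (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...), which is handed the secret string itself rather than a keyring name, and each round ends with nvme disconnect plus nvmf_subsystem_remove_host before the next digest/dhgroup/key combination begins.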
00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.525 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.784 00:18:25.784 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.784 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.784 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.043 { 00:18:26.043 "cntlid": 73, 00:18:26.043 "qid": 0, 00:18:26.043 "state": "enabled", 00:18:26.043 "thread": "nvmf_tgt_poll_group_000", 00:18:26.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:26.043 "listen_address": { 00:18:26.043 "trtype": "TCP", 00:18:26.043 "adrfam": "IPv4", 00:18:26.043 "traddr": "10.0.0.2", 00:18:26.043 "trsvcid": "4420" 00:18:26.043 }, 00:18:26.043 "peer_address": { 00:18:26.043 "trtype": "TCP", 00:18:26.043 "adrfam": "IPv4", 00:18:26.043 "traddr": "10.0.0.1", 00:18:26.043 "trsvcid": "50412" 00:18:26.043 }, 00:18:26.043 "auth": { 00:18:26.043 "state": "completed", 00:18:26.043 "digest": "sha384", 00:18:26.043 "dhgroup": "ffdhe4096" 00:18:26.043 } 00:18:26.043 } 00:18:26.043 ]' 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.043 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.303 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.303 12:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.303 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.303 
12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.303 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.303 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:26.303 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:27.243 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.243 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.243 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.243 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.243 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.243 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.243 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:27.243 12:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.504 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.765 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.765 { 00:18:27.765 "cntlid": 75, 00:18:27.765 "qid": 0, 00:18:27.765 "state": "enabled", 00:18:27.765 "thread": "nvmf_tgt_poll_group_000", 00:18:27.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:27.765 "listen_address": { 00:18:27.765 "trtype": "TCP", 00:18:27.765 "adrfam": "IPv4", 00:18:27.765 "traddr": "10.0.0.2", 00:18:27.765 "trsvcid": "4420" 00:18:27.765 }, 00:18:27.765 "peer_address": { 00:18:27.765 "trtype": "TCP", 00:18:27.765 "adrfam": "IPv4", 00:18:27.765 "traddr": "10.0.0.1", 00:18:27.765 "trsvcid": "50450" 00:18:27.765 }, 00:18:27.765 "auth": { 00:18:27.765 "state": "completed", 00:18:27.765 "digest": "sha384", 00:18:27.765 "dhgroup": "ffdhe4096" 00:18:27.765 } 00:18:27.765 } 00:18:27.765 ]' 00:18:27.765 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.027 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.027 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.027 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:28.027 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.027 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.027 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.027 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.287 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:28.287 12:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:28.860 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.860 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.860 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.860 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.860 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.860 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.860 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:28.860 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:29.120 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:29.120 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.120 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.120 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.120 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.120 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.120 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.121 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.121 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.121 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.121 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.121 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.121 12:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.381 00:18:29.381 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.381 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.382 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.643 { 00:18:29.643 "cntlid": 77, 00:18:29.643 "qid": 0, 00:18:29.643 "state": "enabled", 00:18:29.643 "thread": "nvmf_tgt_poll_group_000", 00:18:29.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:29.643 "listen_address": { 00:18:29.643 "trtype": "TCP", 00:18:29.643 "adrfam": "IPv4", 00:18:29.643 "traddr": "10.0.0.2", 00:18:29.643 "trsvcid": "4420" 00:18:29.643 }, 00:18:29.643 "peer_address": { 00:18:29.643 "trtype": "TCP", 00:18:29.643 "adrfam": "IPv4", 00:18:29.643 "traddr": "10.0.0.1", 00:18:29.643 "trsvcid": "50464" 00:18:29.643 }, 00:18:29.643 "auth": { 00:18:29.643 "state": "completed", 00:18:29.643 "digest": "sha384", 00:18:29.643 "dhgroup": "ffdhe4096" 00:18:29.643 } 00:18:29.643 } 00:18:29.643 ]' 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.643 12:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.643 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.904 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:29.904 12:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.845 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.106 00:18:31.106 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.106 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.106 12:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.366 { 00:18:31.366 "cntlid": 79, 00:18:31.366 "qid": 0, 00:18:31.366 "state": "enabled", 00:18:31.366 "thread": "nvmf_tgt_poll_group_000", 00:18:31.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:31.366 "listen_address": { 00:18:31.366 "trtype": "TCP", 00:18:31.366 "adrfam": "IPv4", 00:18:31.366 "traddr": "10.0.0.2", 00:18:31.366 "trsvcid": "4420" 00:18:31.366 }, 00:18:31.366 "peer_address": { 00:18:31.366 "trtype": "TCP", 00:18:31.366 "adrfam": "IPv4", 00:18:31.366 "traddr": "10.0.0.1", 00:18:31.366 "trsvcid": "50484" 00:18:31.366 }, 00:18:31.366 "auth": { 00:18:31.366 "state": "completed", 00:18:31.366 "digest": "sha384", 00:18:31.366 "dhgroup": "ffdhe4096" 00:18:31.366 } 00:18:31.366 } 00:18:31.366 ]' 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.366 12:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.366 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.628 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.628 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.628 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.628 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:31.628 12:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:32.568 12:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.568 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.138 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.138 { 00:18:33.138 "cntlid": 81, 00:18:33.138 "qid": 0, 00:18:33.138 "state": "enabled", 00:18:33.138 "thread": "nvmf_tgt_poll_group_000", 00:18:33.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:33.138 "listen_address": { 00:18:33.138 "trtype": "TCP", 00:18:33.138 "adrfam": "IPv4", 00:18:33.138 "traddr": "10.0.0.2", 00:18:33.138 "trsvcid": "4420" 00:18:33.138 }, 00:18:33.138 "peer_address": { 00:18:33.138 "trtype": "TCP", 00:18:33.138 "adrfam": "IPv4", 00:18:33.138 "traddr": "10.0.0.1", 00:18:33.138 "trsvcid": "59372" 00:18:33.138 }, 00:18:33.138 "auth": { 00:18:33.138 "state": "completed", 00:18:33.138 "digest": 
"sha384", 00:18:33.138 "dhgroup": "ffdhe6144" 00:18:33.138 } 00:18:33.138 } 00:18:33.138 ]' 00:18:33.138 12:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.138 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.138 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.399 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:33.399 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.399 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.399 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.399 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.399 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:33.399 12:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:34.344 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.344 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.344 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.344 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.344 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.344 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.344 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.344 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.606 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.866 00:18:34.866 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.866 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.866 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.127 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.127 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.127 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.127 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.127 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.128 { 00:18:35.128 "cntlid": 83, 00:18:35.128 "qid": 0, 00:18:35.128 "state": "enabled", 00:18:35.128 "thread": "nvmf_tgt_poll_group_000", 00:18:35.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:35.128 "listen_address": { 00:18:35.128 "trtype": "TCP", 00:18:35.128 "adrfam": "IPv4", 00:18:35.128 "traddr": "10.0.0.2", 00:18:35.128 
"trsvcid": "4420" 00:18:35.128 }, 00:18:35.128 "peer_address": { 00:18:35.128 "trtype": "TCP", 00:18:35.128 "adrfam": "IPv4", 00:18:35.128 "traddr": "10.0.0.1", 00:18:35.128 "trsvcid": "59400" 00:18:35.128 }, 00:18:35.128 "auth": { 00:18:35.128 "state": "completed", 00:18:35.128 "digest": "sha384", 00:18:35.128 "dhgroup": "ffdhe6144" 00:18:35.128 } 00:18:35.128 } 00:18:35.128 ]' 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.128 12:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.390 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:35.390 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:36.335 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.335 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.335 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.335 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.335 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.335 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.335 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.335 12:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:36.335 
12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.335 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.596 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.858 { 00:18:36.858 "cntlid": 85, 00:18:36.858 "qid": 0, 00:18:36.858 "state": "enabled", 00:18:36.858 "thread": "nvmf_tgt_poll_group_000", 00:18:36.858 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:36.858 "listen_address": { 00:18:36.858 "trtype": "TCP", 00:18:36.858 "adrfam": "IPv4", 00:18:36.858 "traddr": "10.0.0.2", 00:18:36.858 "trsvcid": "4420" 00:18:36.858 }, 00:18:36.858 "peer_address": { 00:18:36.858 "trtype": "TCP", 00:18:36.858 "adrfam": "IPv4", 00:18:36.858 "traddr": "10.0.0.1", 00:18:36.858 "trsvcid": "59424" 00:18:36.858 }, 00:18:36.858 "auth": { 00:18:36.858 "state": "completed", 00:18:36.858 "digest": "sha384", 00:18:36.858 "dhgroup": "ffdhe6144" 00:18:36.858 } 00:18:36.858 } 00:18:36.858 ]' 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.858 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.120 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.120 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.120 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.120 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.120 12:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.120 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:37.120 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:38.063 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.063 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.063 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.063 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.063 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.063 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.063 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.063 12:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.325 12:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.586 00:18:38.586 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.586 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.586 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.848 { 00:18:38.848 "cntlid": 87, 
00:18:38.848 "qid": 0, 00:18:38.848 "state": "enabled", 00:18:38.848 "thread": "nvmf_tgt_poll_group_000", 00:18:38.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:38.848 "listen_address": { 00:18:38.848 "trtype": "TCP", 00:18:38.848 "adrfam": "IPv4", 00:18:38.848 "traddr": "10.0.0.2", 00:18:38.848 "trsvcid": "4420" 00:18:38.848 }, 00:18:38.848 "peer_address": { 00:18:38.848 "trtype": "TCP", 00:18:38.848 "adrfam": "IPv4", 00:18:38.848 "traddr": "10.0.0.1", 00:18:38.848 "trsvcid": "59454" 00:18:38.848 }, 00:18:38.848 "auth": { 00:18:38.848 "state": "completed", 00:18:38.848 "digest": "sha384", 00:18:38.848 "dhgroup": "ffdhe6144" 00:18:38.848 } 00:18:38.848 } 00:18:38.848 ]' 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.848 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.109 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:39.109 12:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.052 12:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.625 00:18:40.625 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.625 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.625 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.625 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.625 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.887 { 00:18:40.887 "cntlid": 89, 00:18:40.887 "qid": 0, 00:18:40.887 "state": "enabled", 00:18:40.887 "thread": "nvmf_tgt_poll_group_000", 00:18:40.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:40.887 "listen_address": { 00:18:40.887 "trtype": "TCP", 00:18:40.887 "adrfam": "IPv4", 00:18:40.887 "traddr": "10.0.0.2", 00:18:40.887 "trsvcid": "4420" 00:18:40.887 }, 00:18:40.887 "peer_address": { 00:18:40.887 "trtype": "TCP", 00:18:40.887 "adrfam": "IPv4", 00:18:40.887 "traddr": "10.0.0.1", 00:18:40.887 "trsvcid": "59492" 00:18:40.887 }, 00:18:40.887 "auth": { 00:18:40.887 "state": "completed", 00:18:40.887 "digest": "sha384", 00:18:40.887 "dhgroup": "ffdhe8192" 00:18:40.887 } 00:18:40.887 } 00:18:40.887 ]' 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.887 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.148 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:41.148 12:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:41.720 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.982 12:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.982 12:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.555 00:18:42.555 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.555 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.555 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.816 { 00:18:42.816 "cntlid": 91, 00:18:42.816 "qid": 0, 00:18:42.816 "state": "enabled", 00:18:42.816 "thread": "nvmf_tgt_poll_group_000", 00:18:42.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:42.816 "listen_address": { 00:18:42.816 "trtype": "TCP", 00:18:42.816 "adrfam": "IPv4", 00:18:42.816 "traddr": "10.0.0.2", 00:18:42.816 "trsvcid": "4420" 00:18:42.816 }, 00:18:42.816 "peer_address": { 00:18:42.816 "trtype": "TCP", 00:18:42.816 "adrfam": "IPv4", 00:18:42.816 "traddr": "10.0.0.1", 00:18:42.816 "trsvcid": "35444" 00:18:42.816 }, 00:18:42.816 "auth": { 00:18:42.816 "state": "completed", 00:18:42.816 "digest": "sha384", 00:18:42.816 "dhgroup": "ffdhe8192" 00:18:42.816 } 00:18:42.816 } 00:18:42.816 ]' 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.816 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.077 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:43.077 12:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.020 12:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.020 12:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.592 00:18:44.592 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.592 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.592 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.854 12:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.854 { 00:18:44.854 "cntlid": 93, 00:18:44.854 "qid": 0, 00:18:44.854 "state": "enabled", 00:18:44.854 "thread": "nvmf_tgt_poll_group_000", 00:18:44.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:44.854 "listen_address": { 00:18:44.854 "trtype": "TCP", 00:18:44.854 "adrfam": "IPv4", 00:18:44.854 "traddr": "10.0.0.2", 00:18:44.854 "trsvcid": "4420" 00:18:44.854 }, 00:18:44.854 "peer_address": { 00:18:44.854 "trtype": "TCP", 00:18:44.854 "adrfam": "IPv4", 00:18:44.854 "traddr": "10.0.0.1", 00:18:44.854 "trsvcid": "35472" 00:18:44.854 }, 00:18:44.854 "auth": { 00:18:44.854 "state": "completed", 00:18:44.854 "digest": "sha384", 00:18:44.854 "dhgroup": "ffdhe8192" 00:18:44.854 } 00:18:44.854 } 00:18:44.854 ]' 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.854 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.117 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:45.117 12:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.061 12:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.061 12:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:46.634 00:18:46.634 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.634 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.634 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.634 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.634 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.635 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.635 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.895 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.895 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.895 { 00:18:46.895 "cntlid": 95, 00:18:46.895 "qid": 0, 00:18:46.895 "state": "enabled", 00:18:46.895 "thread": "nvmf_tgt_poll_group_000", 00:18:46.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:46.895 "listen_address": { 00:18:46.895 "trtype": "TCP", 00:18:46.895 "adrfam": "IPv4", 00:18:46.895 "traddr": "10.0.0.2", 00:18:46.895 "trsvcid": "4420" 00:18:46.895 }, 00:18:46.895 "peer_address": { 00:18:46.895 "trtype": "TCP", 00:18:46.895 "adrfam": "IPv4", 00:18:46.895 "traddr": "10.0.0.1", 00:18:46.895 "trsvcid": "35506" 00:18:46.895 }, 00:18:46.895 "auth": { 00:18:46.895 "state": "completed", 00:18:46.895 "digest": "sha384", 00:18:46.895 "dhgroup": "ffdhe8192" 00:18:46.895 } 00:18:46.896 } 00:18:46.896 ]' 00:18:46.896 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.896 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.896 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.896 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:46.896 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.896 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.896 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.896 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.156 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:47.156 12:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:47.728 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.728 12:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:47.728 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.728 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.989 12:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.249 00:18:48.249 
12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.249 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.249 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.510 { 00:18:48.510 "cntlid": 97, 00:18:48.510 "qid": 0, 00:18:48.510 "state": "enabled", 00:18:48.510 "thread": "nvmf_tgt_poll_group_000", 00:18:48.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:48.510 "listen_address": { 00:18:48.510 "trtype": "TCP", 00:18:48.510 "adrfam": "IPv4", 00:18:48.510 "traddr": "10.0.0.2", 00:18:48.510 "trsvcid": "4420" 00:18:48.510 }, 00:18:48.510 "peer_address": { 00:18:48.510 "trtype": "TCP", 00:18:48.510 "adrfam": "IPv4", 00:18:48.510 "traddr": "10.0.0.1", 00:18:48.510 "trsvcid": "35542" 00:18:48.510 }, 00:18:48.510 "auth": { 00:18:48.510 "state": "completed", 00:18:48.510 "digest": "sha512", 00:18:48.510 "dhgroup": "null" 00:18:48.510 } 00:18:48.510 } 00:18:48.510 ]' 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.510 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.819 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:48.819 12:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:49.499 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.499 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.499 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.499 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.499 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.499 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.499 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:49.499 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.783 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.047 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.047 { 00:18:50.047 "cntlid": 99, 00:18:50.047 "qid": 0, 00:18:50.047 "state": "enabled", 00:18:50.047 "thread": "nvmf_tgt_poll_group_000", 00:18:50.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:50.047 "listen_address": { 00:18:50.047 "trtype": "TCP", 00:18:50.047 "adrfam": "IPv4", 00:18:50.047 "traddr": "10.0.0.2", 00:18:50.047 "trsvcid": "4420" 00:18:50.047 }, 00:18:50.047 "peer_address": { 00:18:50.047 "trtype": "TCP", 00:18:50.047 "adrfam": "IPv4", 00:18:50.047 "traddr": "10.0.0.1", 00:18:50.047 "trsvcid": "35560" 00:18:50.047 }, 00:18:50.047 "auth": { 00:18:50.047 "state": "completed", 00:18:50.047 "digest": "sha512", 00:18:50.047 "dhgroup": "null" 00:18:50.047 } 00:18:50.047 } 00:18:50.047 ]' 00:18:50.047 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.307 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.307 12:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.307 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:50.307 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.307 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.307 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.307 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.566 12:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:50.566 12:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:51.137 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.137 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.137 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.137 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:51.398 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.658 00:18:51.658 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.658 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.658 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.918 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.918 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.918 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.918 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.918 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.918 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.918 { 00:18:51.918 "cntlid": 101, 00:18:51.918 "qid": 0, 00:18:51.918 "state": "enabled", 00:18:51.918 "thread": "nvmf_tgt_poll_group_000", 00:18:51.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:51.918 "listen_address": { 00:18:51.918 "trtype": "TCP", 00:18:51.918 "adrfam": "IPv4", 00:18:51.918 "traddr": "10.0.0.2", 00:18:51.918 "trsvcid": "4420" 00:18:51.918 }, 00:18:51.918 "peer_address": { 00:18:51.918 "trtype": "TCP", 00:18:51.918 "adrfam": "IPv4", 00:18:51.918 "traddr": "10.0.0.1", 00:18:51.918 "trsvcid": "35586" 00:18:51.919 }, 00:18:51.919 "auth": { 00:18:51.919 "state": "completed", 00:18:51.919 "digest": "sha512", 00:18:51.919 "dhgroup": "null" 00:18:51.919 } 00:18:51.919 } 00:18:51.919 ]' 00:18:51.919 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.919 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.919 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.919 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:51.919 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.919 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.919 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.919 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.179 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:52.179 12:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.120 12:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:53.380 00:18:53.380 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.380 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.380 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.640 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.640 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.640 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.640 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.640 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.640 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.640 { 00:18:53.640 "cntlid": 103, 00:18:53.640 "qid": 0, 00:18:53.640 "state": "enabled", 00:18:53.640 "thread": "nvmf_tgt_poll_group_000", 00:18:53.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:53.640 "listen_address": { 00:18:53.640 "trtype": "TCP", 00:18:53.640 "adrfam": "IPv4", 00:18:53.640 "traddr": "10.0.0.2", 00:18:53.640 "trsvcid": "4420" 00:18:53.640 }, 00:18:53.640 "peer_address": { 00:18:53.640 "trtype": "TCP", 00:18:53.640 "adrfam": "IPv4", 00:18:53.640 "traddr": "10.0.0.1", 00:18:53.640 "trsvcid": "34684" 00:18:53.640 }, 00:18:53.640 "auth": { 00:18:53.640 "state": "completed", 00:18:53.640 "digest": "sha512", 00:18:53.640 "dhgroup": "null" 00:18:53.640 } 00:18:53.640 } 00:18:53.640 ]' 00:18:53.640 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:53.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.641 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.901 12:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:53.901 12:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:18:54.841 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.841 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.841 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.841 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.841 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.841 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.841 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.841 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
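The round just completed shows the fixed RPC sequence that connect_authenticate() drives for every digest/dhgroup/key combination in this section. A minimal sketch of one such round, reusing the RPC names, socket paths, and NQNs from the trace above; it assumes keys named key0/ckey0 were already registered with the host keyring earlier in the script (that setup happens before this excerpt):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side (-s /var/tmp/host.sock): restrict the initiator to a single
    # digest/dhgroup so the negotiated parameters are deterministic.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side (default socket): allow the host NQN with the key pair under test.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attaching the controller performs the DH-HMAC-CHAP handshake; it fails
    # if the host and target keys do not match.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0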
00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.842 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.103 00:18:55.103 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.103 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.103 12:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.364 { 00:18:55.364 "cntlid": 105, 00:18:55.364 "qid": 0, 00:18:55.364 "state": "enabled", 00:18:55.364 "thread": "nvmf_tgt_poll_group_000", 00:18:55.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:55.364 "listen_address": { 00:18:55.364 "trtype": "TCP", 00:18:55.364 "adrfam": "IPv4", 00:18:55.364 "traddr": "10.0.0.2", 00:18:55.364 "trsvcid": "4420" 00:18:55.364 }, 00:18:55.364 "peer_address": { 00:18:55.364 "trtype": "TCP", 00:18:55.364 "adrfam": "IPv4", 00:18:55.364 "traddr": "10.0.0.1", 00:18:55.364 "trsvcid": "34694" 00:18:55.364 }, 00:18:55.364 "auth": { 00:18:55.364 "state": "completed", 00:18:55.364 "digest": "sha512", 00:18:55.364 "dhgroup": "ffdhe2048" 00:18:55.364 } 00:18:55.364 } 00:18:55.364 ]' 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.364 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.364 12:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:55.625 12:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.567 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.828 00:18:56.828 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.828 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.828 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.828 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.828 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.828 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.828 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.089 { 00:18:57.089 "cntlid": 107, 00:18:57.089 "qid": 0, 00:18:57.089 "state": "enabled", 00:18:57.089 "thread": "nvmf_tgt_poll_group_000", 00:18:57.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:57.089 "listen_address": { 00:18:57.089 "trtype": "TCP", 00:18:57.089 "adrfam": "IPv4", 00:18:57.089 "traddr": "10.0.0.2", 00:18:57.089 "trsvcid": "4420" 00:18:57.089 }, 00:18:57.089 "peer_address": { 00:18:57.089 "trtype": "TCP", 00:18:57.089 "adrfam": "IPv4", 00:18:57.089 "traddr": "10.0.0.1", 00:18:57.089 "trsvcid": "34712" 00:18:57.089 }, 00:18:57.089 "auth": { 00:18:57.089 "state": "completed", 00:18:57.089 "digest": "sha512", 00:18:57.089 "dhgroup": "ffdhe2048" 00:18:57.089 } 00:18:57.089 } 00:18:57.089 ]' 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.089 12:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.351 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:57.351 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:18:57.922 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.922 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.922 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.922 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.183 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.183 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.183 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.183 12:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
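After each attach, the script verifies what was actually negotiated by reading it back from the target, which is what the qpair JSON and jq checks above are doing. A sketch of those assertions, reusing $rpc and $subnqn from the earlier sketch, with the expected values for a sha512/ffdhe2048 round:

    # The controller must exist on the host side under the name given to -b.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # The target's qpair listing reports the negotiated auth parameters.
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down before the next combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0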
00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.183 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.184 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.444 00:18:58.444 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.444 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.444 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.705 { 00:18:58.705 "cntlid": 109, 00:18:58.705 "qid": 0, 00:18:58.705 "state": "enabled", 00:18:58.705 "thread": "nvmf_tgt_poll_group_000", 00:18:58.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:58.705 "listen_address": { 00:18:58.705 "trtype": "TCP", 00:18:58.705 "adrfam": "IPv4", 00:18:58.705 "traddr": "10.0.0.2", 00:18:58.705 "trsvcid": "4420" 00:18:58.705 }, 00:18:58.705 "peer_address": { 00:18:58.705 "trtype": "TCP", 00:18:58.705 "adrfam": "IPv4", 00:18:58.705 "traddr": "10.0.0.1", 00:18:58.705 "trsvcid": "34730" 00:18:58.705 }, 00:18:58.705 "auth": { 00:18:58.705 "state": "completed", 00:18:58.705 "digest": "sha512", 00:18:58.705 "dhgroup": "ffdhe2048" 00:18:58.705 } 00:18:58.705 } 00:18:58.705 ]' 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.705 12:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.705 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.965 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:58.965 12:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.909 12:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:59.909 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.170 00:19:00.170 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.170 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.170 12:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.431 { 00:19:00.431 "cntlid": 111, 00:19:00.431 "qid": 0, 00:19:00.431 "state": "enabled", 00:19:00.431 "thread": "nvmf_tgt_poll_group_000", 00:19:00.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:00.431 "listen_address": { 00:19:00.431 "trtype": "TCP", 00:19:00.431 "adrfam": "IPv4", 00:19:00.431 "traddr": "10.0.0.2", 00:19:00.431 "trsvcid": "4420" 00:19:00.431 }, 00:19:00.431 "peer_address": { 00:19:00.431 "trtype": "TCP", 00:19:00.431 "adrfam": "IPv4", 00:19:00.431 "traddr": "10.0.0.1", 00:19:00.431 "trsvcid": "34766" 00:19:00.431 }, 00:19:00.431 "auth": { 00:19:00.431 "state": "completed", 00:19:00.431 "digest": "sha512", 00:19:00.431 "dhgroup": "ffdhe2048" 00:19:00.431 } 00:19:00.431 } 00:19:00.431 ]' 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.431 
12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.431 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.692 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:00.692 12:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:01.635 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.636 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.898 00:19:01.898 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.898 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.898 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.159 { 00:19:02.159 "cntlid": 113, 00:19:02.159 "qid": 0, 00:19:02.159 "state": "enabled", 00:19:02.159 "thread": "nvmf_tgt_poll_group_000", 00:19:02.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:02.159 "listen_address": { 00:19:02.159 "trtype": "TCP", 00:19:02.159 "adrfam": "IPv4", 00:19:02.159 "traddr": "10.0.0.2", 00:19:02.159 "trsvcid": "4420" 00:19:02.159 }, 00:19:02.159 "peer_address": { 00:19:02.159 "trtype": "TCP", 00:19:02.159 "adrfam": "IPv4", 00:19:02.159 "traddr": "10.0.0.1", 00:19:02.159 "trsvcid": "34812" 00:19:02.159 }, 00:19:02.159 "auth": { 00:19:02.159 "state": "completed", 00:19:02.159 "digest": "sha512", 00:19:02.159 "dhgroup": "ffdhe3072" 00:19:02.159 } 00:19:02.159 } 00:19:02.159 ]' 00:19:02.159 12:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.159 12:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.420 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:02.420 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:03.362 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.362 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.362 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.362 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.362 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.362 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.362 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.362 12:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.362 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.624 00:19:03.624 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.624 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.624 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.885 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.885 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.886 { 00:19:03.886 "cntlid": 115, 00:19:03.886 "qid": 0, 00:19:03.886 "state": "enabled", 00:19:03.886 "thread": "nvmf_tgt_poll_group_000", 00:19:03.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:03.886 "listen_address": { 00:19:03.886 "trtype": "TCP", 00:19:03.886 "adrfam": "IPv4", 00:19:03.886 "traddr": "10.0.0.2", 00:19:03.886 "trsvcid": "4420" 00:19:03.886 }, 00:19:03.886 "peer_address": { 00:19:03.886 "trtype": "TCP", 00:19:03.886 "adrfam": "IPv4", 
00:19:03.886 "traddr": "10.0.0.1", 00:19:03.886 "trsvcid": "36670" 00:19:03.886 }, 00:19:03.886 "auth": { 00:19:03.886 "state": "completed", 00:19:03.886 "digest": "sha512", 00:19:03.886 "dhgroup": "ffdhe3072" 00:19:03.886 } 00:19:03.886 } 00:19:03.886 ]' 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.886 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.146 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:19:04.146 12:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
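Each combination is also exercised through the kernel initiator (the auth.sh@36/@80/@82 lines in the trace): nvme-cli receives the secrets inline rather than by keyring name. The DHHC-1:<t>:<base64>: strings are the spec-defined secret representation; as I read it, <t> is 00 for a non-transformed secret and 01/02/03 for SHA-256/384/512-transformed ones, with a CRC-32 folded into the base64 payload. Secrets are elided below as placeholders; the trace shows the actual test keys:

    # Kernel-initiator leg of a round; -l 0 sets a zero ctrl-loss timeout so a
    # failed handshake returns promptly instead of retrying.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
        --dhchap-secret 'DHHC-1:01:<base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64>:'
    nvme disconnect -n "$subnqn"   # trace expects: disconnected 1 controller(s)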
00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.087 12:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.348 00:19:05.348 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.348 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.348 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.608 { 00:19:05.608 "cntlid": 117, 00:19:05.608 "qid": 0, 00:19:05.608 "state": "enabled", 00:19:05.608 "thread": "nvmf_tgt_poll_group_000", 00:19:05.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:05.608 "listen_address": { 00:19:05.608 "trtype": "TCP", 
00:19:05.608 "adrfam": "IPv4", 00:19:05.608 "traddr": "10.0.0.2", 00:19:05.608 "trsvcid": "4420" 00:19:05.608 }, 00:19:05.608 "peer_address": { 00:19:05.608 "trtype": "TCP", 00:19:05.608 "adrfam": "IPv4", 00:19:05.608 "traddr": "10.0.0.1", 00:19:05.608 "trsvcid": "36696" 00:19:05.608 }, 00:19:05.608 "auth": { 00:19:05.608 "state": "completed", 00:19:05.608 "digest": "sha512", 00:19:05.608 "dhgroup": "ffdhe3072" 00:19:05.608 } 00:19:05.608 } 00:19:05.608 ]' 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.608 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.869 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:19:05.869 12:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.810 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:07.071 00:19:07.071 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.071 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.071 12:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.332 { 00:19:07.332 "cntlid": 119, 00:19:07.332 "qid": 0, 00:19:07.332 "state": "enabled", 00:19:07.332 "thread": "nvmf_tgt_poll_group_000", 00:19:07.332 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:07.332 "listen_address": { 00:19:07.332 "trtype": "TCP", 00:19:07.332 "adrfam": "IPv4", 00:19:07.332 "traddr": "10.0.0.2", 00:19:07.332 "trsvcid": "4420" 00:19:07.332 }, 00:19:07.332 "peer_address": { 00:19:07.332 "trtype": "TCP", 00:19:07.332 "adrfam": "IPv4", 00:19:07.332 "traddr": "10.0.0.1", 00:19:07.332 "trsvcid": "36720" 00:19:07.332 }, 00:19:07.332 "auth": { 00:19:07.332 "state": "completed", 00:19:07.332 "digest": "sha512", 00:19:07.332 "dhgroup": "ffdhe3072" 00:19:07.332 } 00:19:07.332 } 00:19:07.332 ]' 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.332 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.593 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:07.593 12:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.536 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.536 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.537 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.537 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.537 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.537 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.537 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.537 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.797 00:19:08.797 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.797 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.797 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.057 12:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.057 { 00:19:09.057 "cntlid": 121, 00:19:09.057 "qid": 0, 00:19:09.057 "state": "enabled", 00:19:09.057 "thread": "nvmf_tgt_poll_group_000", 00:19:09.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:09.057 "listen_address": { 00:19:09.057 "trtype": "TCP", 00:19:09.057 "adrfam": "IPv4", 00:19:09.057 "traddr": "10.0.0.2", 00:19:09.057 "trsvcid": "4420" 00:19:09.057 }, 00:19:09.057 "peer_address": { 00:19:09.057 "trtype": "TCP", 00:19:09.057 "adrfam": "IPv4", 00:19:09.057 "traddr": "10.0.0.1", 00:19:09.057 "trsvcid": "36748" 00:19:09.057 }, 00:19:09.057 "auth": { 00:19:09.057 "state": "completed", 00:19:09.057 "digest": "sha512", 00:19:09.057 "dhgroup": "ffdhe4096" 00:19:09.057 } 00:19:09.057 } 00:19:09.057 ]' 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.057 12:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.317 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:09.317 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:10.257 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.257 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:10.257 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.257 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.257 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
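Condensed, each pass of the dhgroup/keyid loop traced above and below runs the same flow. The sketch here is reconstructed only from commands visible in this trace; $dhgroup, $keyid and $secret are placeholders for the loop values, the optional --dhchap-ctrlr-key/ckeyN controller-key variant is elided, and hostrpc is shown expanded into the rpc.py -s /var/tmp/host.sock call the trace records (rpc_cmd is assumed to hit the target's default RPC socket):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# 1. Restrict the host-side NVMe bdev driver to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
# 2. Authorize the host on the subsystem with the key under test (target side).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
# 3. Attach a controller through the host socket, forcing a DH-HMAC-CHAP handshake.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
# 4. Verify the controller exists and the negotiated auth parameters on the qpair.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect "completed"
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'      # expect "$dhgroup"
# 5. Detach, repeat the handshake with the kernel initiator, then clean up.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret "$secret"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
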
00:19:10.257 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.257 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.257 12:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.257 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.517 00:19:10.517 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.517 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.517 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.776 { 00:19:10.776 "cntlid": 123, 00:19:10.776 "qid": 0, 00:19:10.776 "state": "enabled", 00:19:10.776 "thread": "nvmf_tgt_poll_group_000", 00:19:10.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:10.776 "listen_address": { 00:19:10.776 "trtype": "TCP", 00:19:10.776 "adrfam": "IPv4", 00:19:10.776 "traddr": "10.0.0.2", 00:19:10.776 "trsvcid": "4420" 00:19:10.776 }, 00:19:10.776 "peer_address": { 00:19:10.776 "trtype": "TCP", 00:19:10.776 "adrfam": "IPv4", 00:19:10.776 "traddr": "10.0.0.1", 00:19:10.776 "trsvcid": "36762" 00:19:10.776 }, 00:19:10.776 "auth": { 00:19:10.776 "state": "completed", 00:19:10.776 "digest": "sha512", 00:19:10.776 "dhgroup": "ffdhe4096" 00:19:10.776 } 00:19:10.776 } 00:19:10.776 ]' 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.776 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.035 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:19:11.036 12:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.976 12:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.976 12:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.237 00:19:12.237 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.237 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.237 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.497 12:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.497 { 00:19:12.497 "cntlid": 125, 00:19:12.497 "qid": 0, 00:19:12.497 "state": "enabled", 00:19:12.497 "thread": "nvmf_tgt_poll_group_000", 00:19:12.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:12.497 "listen_address": { 00:19:12.497 "trtype": "TCP", 00:19:12.497 "adrfam": "IPv4", 00:19:12.497 "traddr": "10.0.0.2", 00:19:12.497 "trsvcid": "4420" 00:19:12.497 }, 00:19:12.497 "peer_address": { 00:19:12.497 "trtype": "TCP", 00:19:12.497 "adrfam": "IPv4", 00:19:12.497 "traddr": "10.0.0.1", 00:19:12.497 "trsvcid": "48034" 00:19:12.497 }, 00:19:12.497 "auth": { 00:19:12.497 "state": "completed", 00:19:12.497 "digest": "sha512", 00:19:12.497 "dhgroup": "ffdhe4096" 00:19:12.497 } 00:19:12.497 } 00:19:12.497 ]' 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.497 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.757 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:19:12.757 12:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.696 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:13.697 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.697 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.697 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.697 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.697 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.697 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.957 00:19:13.957 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.957 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.957 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.217 12:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.217 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.217 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.217 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.217 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.217 12:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.217 { 00:19:14.217 "cntlid": 127, 00:19:14.217 "qid": 0, 00:19:14.217 "state": "enabled", 00:19:14.217 "thread": "nvmf_tgt_poll_group_000", 00:19:14.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:14.217 "listen_address": { 00:19:14.217 "trtype": "TCP", 00:19:14.217 "adrfam": "IPv4", 00:19:14.217 "traddr": "10.0.0.2", 00:19:14.217 "trsvcid": "4420" 00:19:14.217 }, 00:19:14.217 "peer_address": { 00:19:14.217 "trtype": "TCP", 00:19:14.217 "adrfam": "IPv4", 00:19:14.217 "traddr": "10.0.0.1", 00:19:14.217 "trsvcid": "48068" 00:19:14.217 }, 00:19:14.217 "auth": { 00:19:14.217 "state": "completed", 00:19:14.217 "digest": "sha512", 00:19:14.217 "dhgroup": "ffdhe4096" 00:19:14.217 } 00:19:14.217 } 00:19:14.217 ]' 00:19:14.217 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.217 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.217 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.217 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.217 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.477 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.477 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.477 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.477 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:14.477 12:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:15.417 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.418 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.989 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.989 
12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.989 { 00:19:15.989 "cntlid": 129, 00:19:15.989 "qid": 0, 00:19:15.989 "state": "enabled", 00:19:15.989 "thread": "nvmf_tgt_poll_group_000", 00:19:15.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:15.989 "listen_address": { 00:19:15.989 "trtype": "TCP", 00:19:15.989 "adrfam": "IPv4", 00:19:15.989 "traddr": "10.0.0.2", 00:19:15.989 "trsvcid": "4420" 00:19:15.989 }, 00:19:15.989 "peer_address": { 00:19:15.989 "trtype": "TCP", 00:19:15.989 "adrfam": "IPv4", 00:19:15.989 "traddr": "10.0.0.1", 00:19:15.989 "trsvcid": "48088" 00:19:15.989 }, 00:19:15.989 "auth": { 00:19:15.989 "state": "completed", 00:19:15.989 "digest": "sha512", 00:19:15.989 "dhgroup": "ffdhe6144" 00:19:15.989 } 00:19:15.989 } 00:19:15.989 ]' 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.989 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.249 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.249 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.249 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.249 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.249 12:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.249 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:16.249 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:17.190 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.190 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.190 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.190 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.190 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.190 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.190 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.191 12:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.451 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:17.451 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.451 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.452 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.712 00:19:17.712 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.712 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.712 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.974 { 00:19:17.974 "cntlid": 131, 00:19:17.974 "qid": 0, 00:19:17.974 "state": "enabled", 00:19:17.974 "thread": "nvmf_tgt_poll_group_000", 00:19:17.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:17.974 "listen_address": { 00:19:17.974 "trtype": "TCP", 00:19:17.974 "adrfam": "IPv4", 00:19:17.974 "traddr": "10.0.0.2", 00:19:17.974 "trsvcid": "4420" 00:19:17.974 }, 00:19:17.974 "peer_address": { 00:19:17.974 "trtype": "TCP", 00:19:17.974 "adrfam": "IPv4", 00:19:17.974 "traddr": "10.0.0.1", 00:19:17.974 "trsvcid": "48124" 00:19:17.974 }, 00:19:17.974 "auth": { 00:19:17.974 "state": "completed", 00:19:17.974 "digest": "sha512", 00:19:17.974 "dhgroup": "ffdhe6144" 00:19:17.974 } 00:19:17.974 } 00:19:17.974 ]' 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.974 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.234 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:19:18.234 12:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:19:19.175 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.175 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.175 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.175 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.176 12:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.437 00:19:19.437 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.437 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.437 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.713 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.713 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.713 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.713 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.713 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.713 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.713 { 00:19:19.713 "cntlid": 133, 00:19:19.713 "qid": 0, 00:19:19.713 "state": "enabled", 00:19:19.713 "thread": "nvmf_tgt_poll_group_000", 00:19:19.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:19.713 "listen_address": { 00:19:19.713 "trtype": "TCP", 00:19:19.713 "adrfam": "IPv4", 00:19:19.713 "traddr": "10.0.0.2", 00:19:19.713 "trsvcid": "4420" 00:19:19.713 }, 00:19:19.713 "peer_address": { 00:19:19.713 "trtype": "TCP", 00:19:19.713 "adrfam": "IPv4", 00:19:19.713 "traddr": "10.0.0.1", 00:19:19.713 "trsvcid": "48152" 00:19:19.713 }, 00:19:19.713 "auth": { 00:19:19.713 "state": "completed", 00:19:19.713 "digest": "sha512", 00:19:19.713 "dhgroup": "ffdhe6144" 00:19:19.713 } 00:19:19.714 } 00:19:19.714 ]' 00:19:19.714 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.714 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.714 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.051 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.051 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.051 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.051 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.051 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.051 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret 
DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:19:20.051 12:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:21.011 12:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.271 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.531 { 00:19:21.531 "cntlid": 135, 00:19:21.531 "qid": 0, 00:19:21.531 "state": "enabled", 00:19:21.531 "thread": "nvmf_tgt_poll_group_000", 00:19:21.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:21.531 "listen_address": { 00:19:21.531 "trtype": "TCP", 00:19:21.531 "adrfam": "IPv4", 00:19:21.531 "traddr": "10.0.0.2", 00:19:21.531 "trsvcid": "4420" 00:19:21.531 }, 00:19:21.531 "peer_address": { 00:19:21.531 "trtype": "TCP", 00:19:21.531 "adrfam": "IPv4", 00:19:21.531 "traddr": "10.0.0.1", 00:19:21.531 "trsvcid": "48170" 00:19:21.531 }, 00:19:21.531 "auth": { 00:19:21.531 "state": "completed", 00:19:21.531 "digest": "sha512", 00:19:21.531 "dhgroup": "ffdhe6144" 00:19:21.531 } 00:19:21.531 } 00:19:21.531 ]' 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.531 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.791 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.791 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.791 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.791 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.791 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.791 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:21.791 12:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.735 12:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.306 00:19:23.306 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.306 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.307 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.568 { 00:19:23.568 "cntlid": 137, 00:19:23.568 "qid": 0, 00:19:23.568 "state": "enabled", 00:19:23.568 "thread": "nvmf_tgt_poll_group_000", 00:19:23.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:23.568 "listen_address": { 00:19:23.568 "trtype": "TCP", 00:19:23.568 "adrfam": "IPv4", 00:19:23.568 "traddr": "10.0.0.2", 00:19:23.568 "trsvcid": "4420" 00:19:23.568 }, 00:19:23.568 "peer_address": { 00:19:23.568 "trtype": "TCP", 00:19:23.568 "adrfam": "IPv4", 00:19:23.568 "traddr": "10.0.0.1", 00:19:23.568 "trsvcid": "58324" 00:19:23.568 }, 00:19:23.568 "auth": { 00:19:23.568 "state": "completed", 00:19:23.568 "digest": "sha512", 00:19:23.568 "dhgroup": "ffdhe8192" 00:19:23.568 } 00:19:23.568 } 00:19:23.568 ]' 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.568 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.829 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.829 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.829 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.829 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:23.829 12:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.769 12:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.769 12:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.338 00:19:25.338 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.338 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.338 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.599 { 00:19:25.599 "cntlid": 139, 00:19:25.599 "qid": 0, 00:19:25.599 "state": "enabled", 00:19:25.599 "thread": "nvmf_tgt_poll_group_000", 00:19:25.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:25.599 "listen_address": { 00:19:25.599 "trtype": "TCP", 00:19:25.599 "adrfam": "IPv4", 00:19:25.599 "traddr": "10.0.0.2", 00:19:25.599 "trsvcid": "4420" 00:19:25.599 }, 00:19:25.599 "peer_address": { 00:19:25.599 "trtype": "TCP", 00:19:25.599 "adrfam": "IPv4", 00:19:25.599 "traddr": "10.0.0.1", 00:19:25.599 "trsvcid": "58362" 00:19:25.599 }, 00:19:25.599 "auth": { 00:19:25.599 "state": "completed", 00:19:25.599 "digest": "sha512", 00:19:25.599 "dhgroup": "ffdhe8192" 00:19:25.599 } 00:19:25.599 } 00:19:25.599 ]' 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.599 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.859 12:54:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.859 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.859 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.859 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:19:25.859 12:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: --dhchap-ctrl-secret DHHC-1:02:ZWViZDExZDNkNDkzMmJjNzRkODdkMzFiMTYyNzU4OTYyOWY4NDU0ZDYxMzMwYTk44XMd5w==: 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.800 12:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.800 12:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.370 00:19:27.370 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.370 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.370 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.630 { 00:19:27.630 "cntlid": 141, 00:19:27.630 "qid": 0, 00:19:27.630 "state": "enabled", 00:19:27.630 "thread": "nvmf_tgt_poll_group_000", 00:19:27.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:27.630 "listen_address": { 00:19:27.630 "trtype": "TCP", 00:19:27.630 "adrfam": "IPv4", 00:19:27.630 "traddr": "10.0.0.2", 00:19:27.630 "trsvcid": "4420" 00:19:27.630 }, 00:19:27.630 "peer_address": { 00:19:27.630 "trtype": "TCP", 00:19:27.630 "adrfam": "IPv4", 00:19:27.630 "traddr": "10.0.0.1", 00:19:27.630 "trsvcid": "58390" 00:19:27.630 }, 00:19:27.630 "auth": { 00:19:27.630 "state": "completed", 00:19:27.630 "digest": "sha512", 00:19:27.630 "dhgroup": "ffdhe8192" 00:19:27.630 } 00:19:27.630 } 00:19:27.630 ]' 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.630 12:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.630 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.890 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:19:27.890 12:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:01:YWI3NGZlODg0ODljNjAyNTRiM2Q3MTEyOGE1MzY0OGZRbASy: 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.829 12:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.829 12:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.401 00:19:29.401 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.401 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.401 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.661 { 00:19:29.661 "cntlid": 143, 00:19:29.661 "qid": 0, 00:19:29.661 "state": "enabled", 00:19:29.661 "thread": "nvmf_tgt_poll_group_000", 00:19:29.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:29.661 "listen_address": { 00:19:29.661 "trtype": "TCP", 00:19:29.661 "adrfam": "IPv4", 00:19:29.661 "traddr": "10.0.0.2", 00:19:29.661 "trsvcid": "4420" 00:19:29.661 }, 00:19:29.661 "peer_address": { 00:19:29.661 "trtype": "TCP", 00:19:29.661 "adrfam": "IPv4", 00:19:29.661 "traddr": "10.0.0.1", 00:19:29.661 "trsvcid": "58424" 00:19:29.661 }, 00:19:29.661 "auth": { 00:19:29.661 "state": "completed", 00:19:29.661 "digest": "sha512", 00:19:29.661 "dhgroup": "ffdhe8192" 00:19:29.661 } 00:19:29.661 } 00:19:29.661 ]' 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.661 
12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.661 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.921 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:29.921 12:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:30.861 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.861 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:30.861 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.861 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.861 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.861 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:30.861 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.862 12:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.862 12:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.431 00:19:31.431 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.431 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.431 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.691 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.692 { 00:19:31.692 "cntlid": 145, 00:19:31.692 "qid": 0, 00:19:31.692 "state": "enabled", 00:19:31.692 "thread": "nvmf_tgt_poll_group_000", 00:19:31.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:31.692 "listen_address": { 00:19:31.692 "trtype": "TCP", 00:19:31.692 "adrfam": "IPv4", 00:19:31.692 "traddr": "10.0.0.2", 00:19:31.692 "trsvcid": "4420" 00:19:31.692 }, 00:19:31.692 "peer_address": { 00:19:31.692 
"trtype": "TCP", 00:19:31.692 "adrfam": "IPv4", 00:19:31.692 "traddr": "10.0.0.1", 00:19:31.692 "trsvcid": "58462" 00:19:31.692 }, 00:19:31.692 "auth": { 00:19:31.692 "state": "completed", 00:19:31.692 "digest": "sha512", 00:19:31.692 "dhgroup": "ffdhe8192" 00:19:31.692 } 00:19:31.692 } 00:19:31.692 ]' 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.692 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.952 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:31.952 12:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MzQ5MjY3ZjM1OGUwM2M4MGU1OTdhYzhmYjY1ZmQwZTE2ZjI0MjZkY2ExMGUwZjdj1860UQ==: --dhchap-ctrl-secret DHHC-1:03:NDAyNDM4NDg5YzVkNWFhZmFjMTFjZjczNDhmMTE5ZDVmNmEzN2NlZDYwMGFhMmM3OWFhMDEwNWI0NGYzZDIyNVB1iAY=: 00:19:32.523 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:32.783 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:33.044 request: 00:19:33.044 { 00:19:33.044 "name": "nvme0", 00:19:33.044 "trtype": "tcp", 00:19:33.044 "traddr": "10.0.0.2", 00:19:33.044 "adrfam": "ipv4", 00:19:33.044 "trsvcid": "4420", 00:19:33.044 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:33.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:33.044 "prchk_reftag": false, 00:19:33.044 "prchk_guard": false, 00:19:33.044 "hdgst": false, 00:19:33.044 "ddgst": false, 00:19:33.044 "dhchap_key": "key2", 00:19:33.044 "allow_unrecognized_csi": false, 00:19:33.044 "method": "bdev_nvme_attach_controller", 00:19:33.044 "req_id": 1 00:19:33.044 } 00:19:33.044 Got JSON-RPC error response 00:19:33.044 response: 00:19:33.044 { 00:19:33.044 "code": -5, 00:19:33.044 "message": "Input/output error" 00:19:33.044 } 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.304 12:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.304 12:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:33.565 request: 00:19:33.565 { 00:19:33.565 "name": "nvme0", 00:19:33.565 "trtype": "tcp", 00:19:33.565 "traddr": "10.0.0.2", 00:19:33.565 "adrfam": "ipv4", 00:19:33.565 "trsvcid": "4420", 00:19:33.565 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:33.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:33.565 "prchk_reftag": false, 00:19:33.565 "prchk_guard": false, 00:19:33.565 "hdgst": false, 00:19:33.565 "ddgst": false, 00:19:33.565 "dhchap_key": "key1", 00:19:33.565 "dhchap_ctrlr_key": "ckey2", 00:19:33.565 "allow_unrecognized_csi": false, 00:19:33.565 "method": "bdev_nvme_attach_controller", 00:19:33.565 "req_id": 1 00:19:33.565 } 00:19:33.565 Got JSON-RPC error response 00:19:33.565 response: 00:19:33.565 { 00:19:33.565 "code": -5, 00:19:33.565 "message": "Input/output error" 00:19:33.565 } 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:33.825 12:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.825 12:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.086 request: 00:19:34.086 { 00:19:34.086 "name": "nvme0", 00:19:34.086 "trtype": "tcp", 00:19:34.086 "traddr": "10.0.0.2", 00:19:34.086 "adrfam": "ipv4", 00:19:34.086 "trsvcid": "4420", 00:19:34.086 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:34.086 "prchk_reftag": false, 00:19:34.086 "prchk_guard": false, 00:19:34.086 "hdgst": false, 00:19:34.086 "ddgst": false, 00:19:34.086 "dhchap_key": "key1", 00:19:34.086 "dhchap_ctrlr_key": "ckey1", 00:19:34.086 "allow_unrecognized_csi": false, 00:19:34.086 "method": "bdev_nvme_attach_controller", 00:19:34.086 "req_id": 1 00:19:34.086 } 00:19:34.086 Got JSON-RPC error response 00:19:34.086 response: 00:19:34.086 { 00:19:34.086 "code": -5, 00:19:34.086 "message": "Input/output error" 00:19:34.086 } 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 603513 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 603513 ']' 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 603513 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603513 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603513' 00:19:34.346 killing process with pid 603513 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 603513 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 603513 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=631419 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 631419 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 631419 ']' 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.346 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.347 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.347 12:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 631419 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 631419 ']' 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
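
The restart captured here brings the target back up with --wait-for-rpc, so nvmf_tgt idles until an explicit RPC (framework_start_init) finishes startup, and -L nvmf_auth enables the nvmf_auth debug log flag. That pause is what lets the harness stage every DH-HMAC-CHAP key in the keyring before any host connects. A condensed sketch of the key-loading sequence that follows (the key-file paths and names are the ones visible in this trace; the bare rpc.py invocation stands in for the harness's rpc_cmd wrapper):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    # target-side RPCs (rpc.py defaults to /var/tmp/spdk.sock; the host app
    # in this run is driven separately via -s /var/tmp/host.sock)
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.ivV
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.url
    $rpc keyring_file_add_key key1  /tmp/spdk.key-sha256.fxb
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.f8J
    $rpc keyring_file_add_key key2  /tmp/spdk.key-sha384.KGC
    $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oAi
    $rpc keyring_file_add_key key3  /tmp/spdk.key-sha512.XGG   # key3 has no ckey
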
00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.287 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.548 null0 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ivV 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.url ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.url 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fxb 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.f8J ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.f8J 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:35.548 12:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KGC 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.oAi ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oAi 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XGG 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.548 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
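
The attach that begins here is the final positive round, and it uses the same verify cycle repeated throughout this trace: attach a controller through the host RPC socket with the key under test, check that it registered, then read the negotiated auth parameters off the live qpair on the target side. Condensed, with all values taken from the surrounding log (the expect comments mirror the test's [[ ... ]] assertions; the hostrpc variable is a sketch of the harness helper of the same name):

    hostrpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    $hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    # target side: nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 reports
    #   .[0].auth.state   == "completed"
    #   .[0].auth.digest  == "sha512"
    #   .[0].auth.dhgroup == "ffdhe8192"
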
00:19:35.809 12:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.754 nvme0n1 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.754 { 00:19:36.754 "cntlid": 1, 00:19:36.754 "qid": 0, 00:19:36.754 "state": "enabled", 00:19:36.754 "thread": "nvmf_tgt_poll_group_000", 00:19:36.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:36.754 "listen_address": { 00:19:36.754 "trtype": "TCP", 00:19:36.754 "adrfam": "IPv4", 00:19:36.754 "traddr": "10.0.0.2", 00:19:36.754 "trsvcid": "4420" 00:19:36.754 }, 00:19:36.754 "peer_address": { 00:19:36.754 "trtype": "TCP", 00:19:36.754 "adrfam": "IPv4", 00:19:36.754 "traddr": "10.0.0.1", 00:19:36.754 "trsvcid": "59616" 00:19:36.754 }, 00:19:36.754 "auth": { 00:19:36.754 "state": "completed", 00:19:36.754 "digest": "sha512", 00:19:36.754 "dhgroup": "ffdhe8192" 00:19:36.754 } 00:19:36.754 } 00:19:36.754 ]' 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.754 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.015 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.015 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.016 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.016 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:37.016 12:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.957 12:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.218 request: 00:19:38.218 { 00:19:38.218 "name": "nvme0", 00:19:38.218 "trtype": "tcp", 00:19:38.218 "traddr": "10.0.0.2", 00:19:38.218 "adrfam": "ipv4", 00:19:38.218 "trsvcid": "4420", 00:19:38.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:38.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:38.218 "prchk_reftag": false, 00:19:38.218 "prchk_guard": false, 00:19:38.218 "hdgst": false, 00:19:38.218 "ddgst": false, 00:19:38.218 "dhchap_key": "key3", 00:19:38.218 "allow_unrecognized_csi": false, 00:19:38.218 "method": "bdev_nvme_attach_controller", 00:19:38.218 "req_id": 1 00:19:38.218 } 00:19:38.218 Got JSON-RPC error response 00:19:38.218 response: 00:19:38.218 { 00:19:38.218 "code": -5, 00:19:38.218 "message": "Input/output error" 00:19:38.218 } 00:19:38.218 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:38.218 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.218 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.218 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.218 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:38.218 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:38.218 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:38.218 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:38.478 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:38.478 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:38.478 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:38.478 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:38.478 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.478 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:38.478 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.478 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.479 request: 00:19:38.479 { 00:19:38.479 "name": "nvme0", 00:19:38.479 "trtype": "tcp", 00:19:38.479 "traddr": "10.0.0.2", 00:19:38.479 "adrfam": "ipv4", 00:19:38.479 "trsvcid": "4420", 00:19:38.479 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:38.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:38.479 "prchk_reftag": false, 00:19:38.479 "prchk_guard": false, 00:19:38.479 "hdgst": false, 00:19:38.479 "ddgst": false, 00:19:38.479 "dhchap_key": "key3", 00:19:38.479 "allow_unrecognized_csi": false, 00:19:38.479 "method": "bdev_nvme_attach_controller", 00:19:38.479 "req_id": 1 00:19:38.479 } 00:19:38.479 Got JSON-RPC error response 00:19:38.479 response: 00:19:38.479 { 00:19:38.479 "code": -5, 00:19:38.479 "message": "Input/output error" 00:19:38.479 } 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.479 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:38.739 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:39.001 request: 00:19:39.001 { 00:19:39.001 "name": "nvme0", 00:19:39.001 "trtype": "tcp", 00:19:39.001 "traddr": "10.0.0.2", 00:19:39.001 "adrfam": "ipv4", 00:19:39.001 "trsvcid": "4420", 00:19:39.001 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:39.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:39.001 "prchk_reftag": false, 00:19:39.001 "prchk_guard": false, 00:19:39.001 "hdgst": false, 00:19:39.001 "ddgst": false, 00:19:39.001 "dhchap_key": "key0", 00:19:39.001 "dhchap_ctrlr_key": "key1", 00:19:39.001 "allow_unrecognized_csi": false, 00:19:39.001 "method": "bdev_nvme_attach_controller", 00:19:39.001 "req_id": 1 00:19:39.001 } 00:19:39.001 Got JSON-RPC error response 00:19:39.001 response: 00:19:39.001 { 00:19:39.001 "code": -5, 00:19:39.001 "message": "Input/output error" 00:19:39.001 } 00:19:39.001 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:39.001 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.001 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.001 12:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.001 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:39.001 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:39.001 12:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:39.262 nvme0n1 00:19:39.262 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:39.262 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:39.262 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.522 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.522 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.522 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.781 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:39.781 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.781 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.781 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.781 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:39.781 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:39.781 12:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:40.724 nvme0n1 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:40.724 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.984 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.984 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:40.984 12:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: --dhchap-ctrl-secret DHHC-1:03:ZGZlYWQ2OTVlODljNWNhODA5MTA3NjM4NWFhNjliZWQzZjI0OTIxNzgzNGNmMDNhOWYyYTY2NGZmN2FjMjkyMQDyp9w=: 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:41.927 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.928 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:41.928 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.928 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:41.928 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:41.928 12:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:42.499 request: 00:19:42.499 { 00:19:42.499 "name": "nvme0", 00:19:42.499 "trtype": "tcp", 00:19:42.499 "traddr": "10.0.0.2", 00:19:42.499 "adrfam": "ipv4", 00:19:42.499 "trsvcid": "4420", 00:19:42.499 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:42.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:42.499 "prchk_reftag": false, 00:19:42.499 "prchk_guard": false, 00:19:42.499 "hdgst": false, 00:19:42.499 "ddgst": false, 00:19:42.499 "dhchap_key": "key1", 00:19:42.499 "allow_unrecognized_csi": false, 00:19:42.499 "method": "bdev_nvme_attach_controller", 00:19:42.499 "req_id": 1 00:19:42.499 } 00:19:42.499 Got JSON-RPC error response 00:19:42.499 response: 00:19:42.499 { 00:19:42.499 "code": -5, 00:19:42.499 "message": "Input/output error" 00:19:42.499 } 00:19:42.499 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:42.499 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.499 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.499 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.499 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:42.499 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:42.499 12:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:43.438 nvme0n1 00:19:43.438 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:43.438 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:43.438 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.438 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.438 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.438 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.699 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:43.699 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.699 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.699 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.699 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:43.699 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:43.699 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:43.959 nvme0n1 00:19:43.959 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:43.959 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.959 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:43.959 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.959 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.959 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.220 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:44.220 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.220 12:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: '' 2s 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: ]] 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTFhZWViMzE4MDI0OGFiYzIwZGUzYzkzOWM1ZTY0YTSGKM52: 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:44.220 12:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:46.131 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:46.131 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:46.131 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:46.131 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:46.131 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:46.131 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: 2s 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: ]] 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MThmODY5M2U0YjFlMjIxZDE3NWMwMmIyNDZkMDM4OWJjYmYwMGVkNTMzNmZmMGUwGRXVww==: 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:46.392 12:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:48.302 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:49.242 nvme0n1 00:19:49.242 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:49.242 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.242 12:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.242 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.242 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:49.242 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:49.810 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:49.810 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:49.810 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.810 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.810 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:49.811 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.811 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.811 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.071 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:50.071 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:50.071 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:50.071 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:50.071 12:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:50.331 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:50.902 request: 00:19:50.902 { 00:19:50.902 "name": "nvme0", 00:19:50.902 "dhchap_key": "key1", 00:19:50.902 "dhchap_ctrlr_key": "key3", 00:19:50.902 "method": "bdev_nvme_set_keys", 00:19:50.902 "req_id": 1 00:19:50.902 } 00:19:50.902 Got JSON-RPC error response 00:19:50.902 response: 00:19:50.902 { 00:19:50.902 "code": -13, 00:19:50.902 "message": "Permission denied" 00:19:50.902 } 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:50.902 12:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:52.286 12:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:53.225 nvme0n1 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
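The exchanges traced around this point exercise live DHCHAP re-keying: target/auth.sh first updates the key pair the target will accept with nvmf_subsystem_set_keys, then rotates the connected host controller with bdev_nvme_set_keys, and asserts via the NOT wrapper that a pair the target was never given is refused with code -13 (Permission denied), as in the JSON-RPC responses nearby. A minimal sketch of that sequence, reusing the rpc.py path, socket, NQNs, and key names from this trace; it assumes the target (on rpc.py's default socket) and the host app from this run are still up, and is an illustrative reconstruction, not part of the captured log:

#!/usr/bin/env bash
# Sketch of the DHCHAP re-key flow exercised above; values copied from the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# 1) Update the key pair the target will accept from this host.
"$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3

# 2) Rotate the live host controller to the matching pair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

# 3) A pair the target was never given is rejected; the host RPC fails with
#    -13 "Permission denied", which is exactly what the NOT wrapper asserts.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "re-key unexpectedly succeeded" >&2
fi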
00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:53.225 12:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:53.485 request: 00:19:53.485 { 00:19:53.485 "name": "nvme0", 00:19:53.485 "dhchap_key": "key2", 00:19:53.485 "dhchap_ctrlr_key": "key0", 00:19:53.485 "method": "bdev_nvme_set_keys", 00:19:53.485 "req_id": 1 00:19:53.485 } 00:19:53.485 Got JSON-RPC error response 00:19:53.485 response: 00:19:53.485 { 00:19:53.485 "code": -13, 00:19:53.485 "message": "Permission denied" 00:19:53.485 } 00:19:53.485 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:53.485 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:53.485 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:53.485 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:53.486 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:53.486 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:53.486 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.744 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:53.745 12:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:54.685 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:54.685 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:54.685 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 603811 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 603811 ']' 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 603811 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:54.945 12:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 603811 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 603811' 00:19:54.945 killing process with pid 603811 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 603811 00:19:54.945 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 603811 00:19:55.206 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:55.206 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:55.206 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:55.206 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:55.206 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:55.206 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:55.206 12:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:55.206 rmmod nvme_tcp 00:19:55.206 rmmod nvme_fabrics 00:19:55.206 rmmod nvme_keyring 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 631419 ']' 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 631419 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 631419 ']' 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 631419 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 631419 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 631419' 00:19:55.206 killing process with pid 631419 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 631419 00:19:55.206 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 631419 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.466 12:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ivV /tmp/spdk.key-sha256.fxb /tmp/spdk.key-sha384.KGC /tmp/spdk.key-sha512.XGG /tmp/spdk.key-sha512.url /tmp/spdk.key-sha384.f8J /tmp/spdk.key-sha256.oAi '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:58.009 00:19:58.009 real 2m47.161s 00:19:58.009 user 6m10.801s 00:19:58.009 sys 0m25.465s 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.009 ************************************ 00:19:58.009 END TEST nvmf_auth_target 00:19:58.009 ************************************ 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.009 12:54:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:58.009 ************************************ 00:19:58.009 START TEST nvmf_bdevio_no_huge 00:19:58.010 ************************************ 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:58.010 * Looking for test storage... 
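Before the bdevio test proper begins below, the trace above tears down the auth test: killprocess stops the host app (pid 603811, reactor_1) and, inside nvmftestfini, the target (pid 631419, reactor_0); the nvme-tcp, nvme-fabrics, and nvme-keyring modules are then unloaded, iptables rules restored, and the generated /tmp/spdk.key-* files removed. A sketch of the killprocess helper as it can be read back from the traced commands (paraphrased, not the verbatim autotest_common.sh source; the sudo branch visible in the trace is omitted rather than guessed):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # trace: '[' -z 603811 ']'
    kill -0 "$pid" 2>/dev/null || return 0    # trace: kill -0 <pid>; nothing to do if gone
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1
    fi
    # The trace shows a "'[' reactor_N = sudo ']'" guard; the sudo branch is
    # not taken in this run, so its body is left out here.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap it so sockets/ports are freed
}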
00:19:58.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:58.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.010 --rc genhtml_branch_coverage=1 00:19:58.010 --rc genhtml_function_coverage=1 00:19:58.010 --rc genhtml_legend=1 00:19:58.010 --rc geninfo_all_blocks=1 00:19:58.010 --rc geninfo_unexecuted_blocks=1 00:19:58.010 00:19:58.010 ' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:58.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.010 --rc genhtml_branch_coverage=1 00:19:58.010 --rc genhtml_function_coverage=1 00:19:58.010 --rc genhtml_legend=1 00:19:58.010 --rc geninfo_all_blocks=1 00:19:58.010 --rc geninfo_unexecuted_blocks=1 00:19:58.010 00:19:58.010 ' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:58.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.010 --rc genhtml_branch_coverage=1 00:19:58.010 --rc genhtml_function_coverage=1 00:19:58.010 --rc genhtml_legend=1 00:19:58.010 --rc geninfo_all_blocks=1 00:19:58.010 --rc geninfo_unexecuted_blocks=1 00:19:58.010 00:19:58.010 ' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:58.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.010 --rc genhtml_branch_coverage=1 00:19:58.010 --rc genhtml_function_coverage=1 00:19:58.010 --rc genhtml_legend=1 00:19:58.010 --rc geninfo_all_blocks=1 00:19:58.010 --rc geninfo_unexecuted_blocks=1 00:19:58.010 00:19:58.010 ' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.010 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:58.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:58.011 12:54:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:06.151 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:06.152 
12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:06.152 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:06.152 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:06.152 Found net devices under 0000:31:00.0: cvl_0_0 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:06.152 Found net devices under 0000:31:00.1: cvl_0_1 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:06.152 12:54:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.152 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:06.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:20:06.413 00:20:06.413 --- 10.0.0.2 ping statistics --- 00:20:06.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.413 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:20:06.413 00:20:06.413 --- 10.0.0.1 ping statistics --- 00:20:06.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.413 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=640266 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 640266 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 640266 ']' 00:20:06.413 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.414 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.414 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.414 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.414 12:54:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:06.414 [2024-11-25 12:54:46.187202] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:20:06.414 [2024-11-25 12:54:46.187271] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:06.414 [2024-11-25 12:54:46.301011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.675 [2024-11-25 12:54:46.360767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.675 [2024-11-25 12:54:46.360813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.675 [2024-11-25 12:54:46.360823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.675 [2024-11-25 12:54:46.360830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.675 [2024-11-25 12:54:46.360837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
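The launch traced above starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that flow, reusing the binary path, flags, and max_retries=100 budget shown in the trace; polling for the socket file is a simplifying assumption, since the real waitforlisten probes the socket through rpc.py rather than merely testing that it exists.

    # Sketch of nvmfappstart + waitforlisten as traced above. The launch line is
    # taken from the log; the polling loop body is an assumption.
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do              # max_retries=100 as in the trace
        [ -S /var/tmp/spdk.sock ] && break       # assumed check: wait for the RPC socket
        sleep 0.1
    done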
00:20:06.675 [2024-11-25 12:54:46.362391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:06.675 [2024-11-25 12:54:46.362551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:06.675 [2024-11-25 12:54:46.362706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.675 [2024-11-25 12:54:46.362707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.247 [2024-11-25 12:54:47.058100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.247 Malloc0 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.247 [2024-11-25 12:54:47.111869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.247 { 00:20:07.247 "params": { 00:20:07.247 "name": "Nvme$subsystem", 00:20:07.247 "trtype": "$TEST_TRANSPORT", 00:20:07.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.247 "adrfam": "ipv4", 00:20:07.247 "trsvcid": "$NVMF_PORT", 00:20:07.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.247 "hdgst": ${hdgst:-false}, 00:20:07.247 "ddgst": ${ddgst:-false} 00:20:07.247 }, 00:20:07.247 "method": "bdev_nvme_attach_controller" 00:20:07.247 } 00:20:07.247 EOF 00:20:07.247 )") 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:07.247 12:54:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:07.247 "params": { 00:20:07.247 "name": "Nvme1", 00:20:07.247 "trtype": "tcp", 00:20:07.247 "traddr": "10.0.0.2", 00:20:07.247 "adrfam": "ipv4", 00:20:07.247 "trsvcid": "4420", 00:20:07.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.247 "hdgst": false, 00:20:07.247 "ddgst": false 00:20:07.247 }, 00:20:07.247 "method": "bdev_nvme_attach_controller" 00:20:07.247 }' 00:20:07.507 [2024-11-25 12:54:47.178034] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
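For reference, the gen_nvmf_target_json expansion traced above resolves, for subsystem 1, to the JSON fragment below, which bdevio receives on /dev/fd/62. It is reproduced verbatim but pretty-printed so the shape of the bdev_nvme_attach_controller config is easy to read; only this inner object appears in the trace, so any outer wrapper the helper adds around it is not shown here.

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }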
00:20:07.508 [2024-11-25 12:54:47.178121] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid640611 ] 00:20:07.508 [2024-11-25 12:54:47.268487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.508 [2024-11-25 12:54:47.323897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.508 [2024-11-25 12:54:47.323969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.508 [2024-11-25 12:54:47.324161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.767 I/O targets: 00:20:07.767 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:07.767 00:20:07.767 00:20:07.767 CUnit - A unit testing framework for C - Version 2.1-3 00:20:07.767 http://cunit.sourceforge.net/ 00:20:07.767 00:20:07.767 00:20:07.767 Suite: bdevio tests on: Nvme1n1 00:20:07.767 Test: blockdev write read block ...passed 00:20:07.767 Test: blockdev write zeroes read block ...passed 00:20:08.028 Test: blockdev write zeroes read no split ...passed 00:20:08.028 Test: blockdev write zeroes read split ...passed 00:20:08.028 Test: blockdev write zeroes read split partial ...passed 00:20:08.028 Test: blockdev reset ...[2024-11-25 12:54:47.710425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:08.028 [2024-11-25 12:54:47.710485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b40140 (9): Bad file descriptor 00:20:08.028 [2024-11-25 12:54:47.730950] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:20:08.028 passed 00:20:08.028 Test: blockdev write read 8 blocks ...passed 00:20:08.028 Test: blockdev write read size > 128k ...passed 00:20:08.028 Test: blockdev write read invalid size ...passed 00:20:08.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:08.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:08.028 Test: blockdev write read max offset ...passed 00:20:08.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:08.028 Test: blockdev writev readv 8 blocks ...passed 00:20:08.028 Test: blockdev writev readv 30 x 1block ...passed 00:20:08.028 Test: blockdev writev readv block ...passed 00:20:08.028 Test: blockdev writev readv size > 128k ...passed 00:20:08.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:08.028 Test: blockdev comparev and writev ...[2024-11-25 12:54:47.914426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.028 [2024-11-25 12:54:47.914452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.028 [2024-11-25 12:54:47.914464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.028 [2024-11-25 12:54:47.914470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:08.028 [2024-11-25 12:54:47.914996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.028 [2024-11-25 12:54:47.915005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:08.028 [2024-11-25 12:54:47.915015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.028 [2024-11-25 12:54:47.915020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:08.028 [2024-11-25 12:54:47.915510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.028 [2024-11-25 12:54:47.915519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:08.028 [2024-11-25 12:54:47.915529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.028 [2024-11-25 12:54:47.915534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:08.028 [2024-11-25 12:54:47.915990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.028 [2024-11-25 12:54:47.916000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:08.028 [2024-11-25 12:54:47.916010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:08.029 [2024-11-25 12:54:47.916015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:08.290 passed 00:20:08.290 Test: blockdev nvme passthru rw ...passed 00:20:08.290 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:54:48.000731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.290 [2024-11-25 12:54:48.000744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:08.290 [2024-11-25 12:54:48.001122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.290 [2024-11-25 12:54:48.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:08.290 [2024-11-25 12:54:48.001446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.290 [2024-11-25 12:54:48.001454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:08.290 [2024-11-25 12:54:48.001784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:08.290 [2024-11-25 12:54:48.001793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:08.290 passed 00:20:08.290 Test: blockdev nvme admin passthru ...passed 00:20:08.290 Test: blockdev copy ...passed 00:20:08.290 00:20:08.290 Run Summary: Type Total Ran Passed Failed Inactive 00:20:08.290 suites 1 1 n/a 0 0 00:20:08.290 tests 23 23 23 0 0 00:20:08.290 asserts 152 152 152 0 n/a 00:20:08.290 00:20:08.290 Elapsed time = 0.989 seconds 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:08.552 rmmod nvme_tcp 00:20:08.552 rmmod nvme_fabrics 00:20:08.552 rmmod nvme_keyring 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 640266 ']' 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 640266 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 640266 ']' 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 640266 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.552 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 640266 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 640266' 00:20:08.814 killing process with pid 640266 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 640266 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 640266 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.814 12:54:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:11.362 00:20:11.362 real 0m13.408s 00:20:11.362 user 0m13.720s 00:20:11.362 sys 0m7.391s 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:20:11.362 ************************************ 00:20:11.362 END TEST nvmf_bdevio_no_huge 00:20:11.362 ************************************ 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:11.362 ************************************ 00:20:11.362 START TEST nvmf_tls 00:20:11.362 ************************************ 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:11.362 * Looking for test storage... 00:20:11.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:11.362 12:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.362 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:11.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.363 --rc genhtml_branch_coverage=1 00:20:11.363 --rc genhtml_function_coverage=1 00:20:11.363 --rc genhtml_legend=1 00:20:11.363 --rc geninfo_all_blocks=1 00:20:11.363 --rc geninfo_unexecuted_blocks=1 00:20:11.363 00:20:11.363 ' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:11.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.363 --rc genhtml_branch_coverage=1 00:20:11.363 --rc genhtml_function_coverage=1 00:20:11.363 --rc genhtml_legend=1 00:20:11.363 --rc geninfo_all_blocks=1 00:20:11.363 --rc geninfo_unexecuted_blocks=1 00:20:11.363 00:20:11.363 ' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:11.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.363 --rc genhtml_branch_coverage=1 00:20:11.363 --rc genhtml_function_coverage=1 00:20:11.363 --rc genhtml_legend=1 00:20:11.363 --rc geninfo_all_blocks=1 00:20:11.363 --rc geninfo_unexecuted_blocks=1 00:20:11.363 00:20:11.363 ' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:11.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.363 --rc genhtml_branch_coverage=1 00:20:11.363 --rc genhtml_function_coverage=1 00:20:11.363 --rc genhtml_legend=1 00:20:11.363 --rc geninfo_all_blocks=1 00:20:11.363 --rc geninfo_unexecuted_blocks=1 00:20:11.363 00:20:11.363 ' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
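The lt/cmp_versions trace repeated above (here again deciding that lcov 1.15 predates version 2) splits each version string on the characters ".", "-", and ":", pads the shorter list, and compares the components numerically. A minimal sketch of that algorithm under the illustrative name lt_sketch, reduced to the strict less-than case; the real scripts/common.sh helper also validates each component through its decimal function and supports the other comparison operators.

    # Sketch of the version comparison traced above: lt 1.15 2 -> true.
    # lt_sketch is an illustrative name, not the real SPDK function.
    lt_sketch() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing components count as 0
            (( 10#$a > 10#$b )) && return 1        # strictly greater: not less-than
            (( 10#$a < 10#$b )) && return 0        # strictly smaller: less-than
        done
        return 1                                   # equal: not strictly less-than
    }
    lt_sketch 1.15 2 && echo '1.15 < 2'            # matches the trace result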
00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:11.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:20:11.363 12:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.658 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
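The gather_supported_nvmf_pci_devs walk that starts above (and continues just below) drives NIC discovery off a small table of PCI vendor:device IDs, Intel E810 (0x1592, 0x159b) and X722 (0x37d2) plus the Mellanox ConnectX family, each looked up in a pre-built pci_bus_cache map; any function whose ID matches becomes a candidate test port. A minimal stand-alone sketch of the same idea, scanning sysfs directly instead of SPDK's cached map (IDs taken from the arrays in this trace):

    # Hypothetical stand-alone equivalent of the supported-NIC scan.
    intel=0x8086 mellanox=0x15b3
    supported=("$intel:0x1592" "$intel:0x159b" "$intel:0x37d2"
               "$mellanox:0x1013" "$mellanox:0x1015" "$mellanox:0x1017"
               "$mellanox:0x1019" "$mellanox:0x101b" "$mellanox:0x101d")
    for dev in /sys/bus/pci/devices/*; do
        id="$(< "$dev/vendor"):$(< "$dev/device")"
        for want in "${supported[@]}"; do
            # this run matched 0000:31:00.0 and 0000:31:00.1 (0x8086:0x159b, ice)
            [[ $id == "$want" ]] && echo "candidate NIC ${dev##*/} ($id)"
        done
    done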
00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:19.659 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:19.659 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:19.659 Found net devices under 0000:31:00.0: cvl_0_0 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:19.659 Found net devices under 0000:31:00.1: cvl_0_1 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.659 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:19.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:20:19.660 00:20:19.660 --- 10.0.0.2 ping statistics --- 00:20:19.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.660 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:20:19.660 00:20:19.660 --- 10.0.0.1 ping statistics --- 00:20:19.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.660 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.660 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=645582 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 645582 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 645582 ']' 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.921 12:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.921 [2024-11-25 12:54:59.639742] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
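With the pings succeeding in both directions, the test bed built above is complete: one physical port (cvl_0_0) was moved into a private network namespace, cvl_0_0_ns_spdk, and addressed as the target at 10.0.0.2/24, while its sibling port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, with an iptables rule opening TCP port 4420. The ipts helper, judging by its expansion in the trace, just tags each rule with an 'SPDK_NVMF:' comment so cleanup can find it later. A condensed sketch of the plumbing, using the interface names and addresses from this run; nvmf_tgt is then launched inside the namespace via NVMF_TARGET_NS_CMD, as the trace shows next:

    # Tag firewall rules for later cleanup (reconstructed from the
    # expansion shown in the trace above).
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1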
00:20:19.921 [2024-11-25 12:54:59.639811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.921 [2024-11-25 12:54:59.748069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.921 [2024-11-25 12:54:59.798582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.921 [2024-11-25 12:54:59.798634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.921 [2024-11-25 12:54:59.798643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.921 [2024-11-25 12:54:59.798650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.921 [2024-11-25 12:54:59.798656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.921 [2024-11-25 12:54:59.799435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.862 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.862 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:20.863 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.863 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.863 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.863 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.863 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:20.863 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:20.863 true 00:20:20.863 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:20.863 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:21.123 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:21.123 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:21.123 12:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:21.385 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:21.385 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:21.645 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:21.645 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:21.645 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:21.645 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:21.645 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:21.906 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:21.906 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:21.907 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:21.907 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:22.167 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:22.167 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:22.168 12:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:22.168 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:22.168 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:22.428 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:22.428 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:22.428 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:22.688 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.OmuKEC0pqm 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.SlKbRVhLIu 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.OmuKEC0pqm 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.SlKbRVhLIu 00:20:22.949 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:23.210 12:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:23.472 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.OmuKEC0pqm 00:20:23.472 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OmuKEC0pqm 00:20:23.472 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:23.472 [2024-11-25 12:55:03.304494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.472 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:23.732 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:23.992 [2024-11-25 12:55:03.673419] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.992 [2024-11-25 12:55:03.673755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.993 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:23.993 malloc0 00:20:23.993 12:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:24.254 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OmuKEC0pqm 00:20:24.516 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.777 12:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OmuKEC0pqm 00:20:34.777 Initializing NVMe Controllers 00:20:34.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.777 Initialization complete. Launching workers. 00:20:34.777 ======================================================== 00:20:34.777 Latency(us) 00:20:34.777 Device Information : IOPS MiB/s Average min max 00:20:34.777 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18431.04 72.00 3472.44 1174.74 5069.53 00:20:34.777 ======================================================== 00:20:34.777 Total : 18431.04 72.00 3472.44 1174.74 5069.53 00:20:34.777 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OmuKEC0pqm 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OmuKEC0pqm 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=648385 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 648385 /var/tmp/bdevperf.sock 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 648385 ']' 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.777 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.778 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:34.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.778 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.778 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.778 [2024-11-25 12:55:14.587688] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:20:34.778 [2024-11-25 12:55:14.587746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648385 ] 00:20:34.778 [2024-11-25 12:55:14.652120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.038 [2024-11-25 12:55:14.681093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.038 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.038 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.038 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OmuKEC0pqm 00:20:35.299 12:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:35.299 [2024-11-25 12:55:15.102320] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.299 TLSTESTn1 00:20:35.560 12:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:35.560 Running I/O for 10 seconds... 
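While TLSTESTn1 runs its ten-second verify workload, the key plumbing set up earlier is worth unpacking. format_interchange_psk wraps a configured PSK into the NVMe TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash field (01 here), and base64 of the key bytes followed by a CRC32, all colon-delimited. Each string is written to a mode-0600 temp file, registered on the target as keyring entry key0, bound to the host via nvmf_subsystem_add_host --psk key0, and given to the initiator either as --psk-path (spdk_nvme_perf) or as a keyring entry plus --psk (bdevperf). A sketch of the encoding; the helper body is not shown in the trace, so the little-endian CRC byte order here is an assumption:

    # Hypothetical re-implementation of format_interchange_psk <key> <digest>.
    format_interchange_psk() {
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$1" "$2"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # the trace shows: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: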
00:20:37.445 5472.00 IOPS, 21.38 MiB/s [2024-11-25T11:55:18.732Z] 6018.50 IOPS, 23.51 MiB/s [2024-11-25T11:55:19.304Z] 5651.67 IOPS, 22.08 MiB/s [2024-11-25T11:55:20.688Z] 5493.25 IOPS, 21.46 MiB/s [2024-11-25T11:55:21.628Z] 5610.60 IOPS, 21.92 MiB/s [2024-11-25T11:55:22.570Z] 5764.17 IOPS, 22.52 MiB/s [2024-11-25T11:55:23.511Z] 5736.86 IOPS, 22.41 MiB/s [2024-11-25T11:55:24.454Z] 5839.75 IOPS, 22.81 MiB/s [2024-11-25T11:55:25.397Z] 5844.89 IOPS, 22.83 MiB/s [2024-11-25T11:55:25.397Z] 5850.00 IOPS, 22.85 MiB/s 00:20:45.494 Latency(us) 00:20:45.494 [2024-11-25T11:55:25.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.494 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:45.494 Verification LBA range: start 0x0 length 0x2000 00:20:45.494 TLSTESTn1 : 10.02 5848.83 22.85 0.00 0.00 21844.97 4532.91 25122.13 00:20:45.494 [2024-11-25T11:55:25.397Z] =================================================================================================================== 00:20:45.494 [2024-11-25T11:55:25.397Z] Total : 5848.83 22.85 0.00 0.00 21844.97 4532.91 25122.13 00:20:45.494 { 00:20:45.494 "results": [ 00:20:45.494 { 00:20:45.494 "job": "TLSTESTn1", 00:20:45.494 "core_mask": "0x4", 00:20:45.494 "workload": "verify", 00:20:45.494 "status": "finished", 00:20:45.494 "verify_range": { 00:20:45.494 "start": 0, 00:20:45.494 "length": 8192 00:20:45.494 }, 00:20:45.494 "queue_depth": 128, 00:20:45.494 "io_size": 4096, 00:20:45.494 "runtime": 10.023877, 00:20:45.494 "iops": 5848.834737297754, 00:20:45.494 "mibps": 22.847010692569352, 00:20:45.494 "io_failed": 0, 00:20:45.494 "io_timeout": 0, 00:20:45.494 "avg_latency_us": 21844.972833913263, 00:20:45.494 "min_latency_us": 4532.906666666667, 00:20:45.494 "max_latency_us": 25122.133333333335 00:20:45.494 } 00:20:45.494 ], 00:20:45.494 "core_count": 1 00:20:45.494 } 00:20:45.494 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:45.494 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 648385 00:20:45.494 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 648385 ']' 00:20:45.494 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 648385 00:20:45.494 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.494 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.494 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 648385 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 648385' 00:20:45.755 killing process with pid 648385 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 648385 00:20:45.755 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.755 00:20:45.755 Latency(us) 00:20:45.755 [2024-11-25T11:55:25.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.755 [2024-11-25T11:55:25.658Z] 
=================================================================================================================== 00:20:45.755 [2024-11-25T11:55:25.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 648385 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlKbRVhLIu 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlKbRVhLIu 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlKbRVhLIu 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SlKbRVhLIu 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=650525 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 650525 /var/tmp/bdevperf.sock 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 650525 ']' 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
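The NOT run_bdevperf invocation above begins the first negative test: the initiator presents key_2 (tmp.SlKbRVhLIu), whose bytes do not match the PSK registered for host1 on the target, so the TLS handshake, and with it bdev_nvme_attach_controller, must fail; NOT inverts that failure into a test pass. From the es/valid_exec_arg handling in the trace, NOT is essentially an exit-status inverter; a minimal sketch, simplified from the checks visible in autotest_common.sh:

    # Succeed only if the wrapped command fails (simplified sketch).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # deaths by signal still propagate
        (( es != 0 ))                    # success iff the command failed
    }
    NOT false && echo "expected failure observed"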
00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.755 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.755 [2024-11-25 12:55:25.584076] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:20:45.755 [2024-11-25 12:55:25.584134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650525 ] 00:20:45.755 [2024-11-25 12:55:25.647222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.016 [2024-11-25 12:55:25.676103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.016 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.016 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:46.016 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SlKbRVhLIu 00:20:46.279 12:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:46.279 [2024-11-25 12:55:26.073264] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.279 [2024-11-25 12:55:26.077860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:46.279 [2024-11-25 12:55:26.078485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1894a00 (107): Transport endpoint is not connected 00:20:46.279 [2024-11-25 12:55:26.079480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1894a00 (9): Bad file descriptor 00:20:46.279 [2024-11-25 12:55:26.080483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:46.279 [2024-11-25 12:55:26.080490] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:46.279 [2024-11-25 12:55:26.080495] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:46.279 [2024-11-25 12:55:26.080503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:46.279 request: 00:20:46.279 { 00:20:46.279 "name": "TLSTEST", 00:20:46.279 "trtype": "tcp", 00:20:46.279 "traddr": "10.0.0.2", 00:20:46.279 "adrfam": "ipv4", 00:20:46.279 "trsvcid": "4420", 00:20:46.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.279 "prchk_reftag": false, 00:20:46.279 "prchk_guard": false, 00:20:46.279 "hdgst": false, 00:20:46.279 "ddgst": false, 00:20:46.279 "psk": "key0", 00:20:46.279 "allow_unrecognized_csi": false, 00:20:46.279 "method": "bdev_nvme_attach_controller", 00:20:46.279 "req_id": 1 00:20:46.279 } 00:20:46.279 Got JSON-RPC error response 00:20:46.279 response: 00:20:46.279 { 00:20:46.279 "code": -5, 00:20:46.279 "message": "Input/output error" 00:20:46.279 } 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 650525 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 650525 ']' 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 650525 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650525 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650525' 00:20:46.279 killing process with pid 650525 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 650525 00:20:46.279 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.279 00:20:46.279 Latency(us) 00:20:46.279 [2024-11-25T11:55:26.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.279 [2024-11-25T11:55:26.182Z] =================================================================================================================== 00:20:46.279 [2024-11-25T11:55:26.182Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.279 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 650525 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OmuKEC0pqm 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.OmuKEC0pqm 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OmuKEC0pqm 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OmuKEC0pqm 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=650744 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 650744 /var/tmp/bdevperf.sock 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 650744 ']' 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.541 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.541 [2024-11-25 12:55:26.318642] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:20:46.541 [2024-11-25 12:55:26.318697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650744 ] 00:20:46.541 [2024-11-25 12:55:26.385470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.541 [2024-11-25 12:55:26.413281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.802 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.802 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:46.802 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OmuKEC0pqm 00:20:46.802 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:47.063 [2024-11-25 12:55:26.822286] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.063 [2024-11-25 12:55:26.827852] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:47.063 [2024-11-25 12:55:26.827876] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:47.063 [2024-11-25 12:55:26.827895] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:47.063 [2024-11-25 12:55:26.828464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe2a00 (107): Transport endpoint is not connected 00:20:47.063 [2024-11-25 12:55:26.829461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe2a00 (9): Bad file descriptor 00:20:47.063 [2024-11-25 12:55:26.830462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:47.063 [2024-11-25 12:55:26.830470] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:47.063 [2024-11-25 12:55:26.830476] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:47.063 [2024-11-25 12:55:26.830484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
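The lookup errors above pin down the failure mode of this second negative test: the TLS-PSK identity is the string "NVMe0R01 <hostnqn> <subnqn>", and the server-side callback (posix_sock_psk_find_session_server_cb) finds no key registered for host2 against cnode1, because only host1 was added with --psk key0. The handshake is therefore rejected and the client surfaces the same -5 / Input/output error as before. For reference, the identity offered here can be reassembled from the fields in the trace (the NVMe0R01 prefix is copied verbatim from the error messages, not decoded further):

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1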
00:20:47.063 request: 00:20:47.063 { 00:20:47.063 "name": "TLSTEST", 00:20:47.063 "trtype": "tcp", 00:20:47.063 "traddr": "10.0.0.2", 00:20:47.063 "adrfam": "ipv4", 00:20:47.063 "trsvcid": "4420", 00:20:47.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.063 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:47.063 "prchk_reftag": false, 00:20:47.063 "prchk_guard": false, 00:20:47.063 "hdgst": false, 00:20:47.063 "ddgst": false, 00:20:47.063 "psk": "key0", 00:20:47.063 "allow_unrecognized_csi": false, 00:20:47.063 "method": "bdev_nvme_attach_controller", 00:20:47.063 "req_id": 1 00:20:47.063 } 00:20:47.063 Got JSON-RPC error response 00:20:47.063 response: 00:20:47.063 { 00:20:47.063 "code": -5, 00:20:47.063 "message": "Input/output error" 00:20:47.063 } 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 650744 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 650744 ']' 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 650744 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650744 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650744' 00:20:47.063 killing process with pid 650744 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 650744 00:20:47.063 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.063 00:20:47.063 Latency(us) 00:20:47.063 [2024-11-25T11:55:26.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.063 [2024-11-25T11:55:26.966Z] =================================================================================================================== 00:20:47.063 [2024-11-25T11:55:26.966Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:47.063 12:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 650744 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OmuKEC0pqm 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.OmuKEC0pqm 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OmuKEC0pqm 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OmuKEC0pqm 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=650771 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 650771 /var/tmp/bdevperf.sock 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 650771 ']' 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.324 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.324 [2024-11-25 12:55:27.077497] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:20:47.325 [2024-11-25 12:55:27.077553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650771 ] 00:20:47.325 [2024-11-25 12:55:27.142279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.325 [2024-11-25 12:55:27.170800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.585 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.585 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.585 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OmuKEC0pqm 00:20:47.585 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:47.846 [2024-11-25 12:55:27.592150] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.846 [2024-11-25 12:55:27.599720] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:47.846 [2024-11-25 12:55:27.599737] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:47.846 [2024-11-25 12:55:27.599756] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:47.846 [2024-11-25 12:55:27.600453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175aa00 (107): Transport endpoint is not connected 00:20:47.846 [2024-11-25 12:55:27.601449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175aa00 (9): Bad file descriptor 00:20:47.846 [2024-11-25 12:55:27.602450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:47.846 [2024-11-25 12:55:27.602458] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:47.846 [2024-11-25 12:55:27.602463] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:47.846 [2024-11-25 12:55:27.602471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
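As with the first case, the JSON-RPC dump that follows reports -5 (Input/output error); the target knows neither nqn.2016-06.io.spdk:cnode2 for this host nor a PSK for that pairing. Both cases run through the framework's negation wrapper (NOT run_bdevperf ...), which is why the log later checks es=1 after the return 1. A rough sketch of that wrapper, assuming this simplified logic rather than the exact autotest_common.sh implementation:

# Hypothetical simplification of autotest_common.sh's NOT helper:
# succeed only when the wrapped command exits non-zero.
NOT() {
    local es=0
    "$@" || es=$?
    # exit codes above 128 indicate a signal/crash, which still fails the test
    (( es > 128 )) && return "$es"
    (( es != 0 ))   # arithmetic true (non-zero es) yields exit status 0
}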
00:20:47.846 request: 00:20:47.846 { 00:20:47.846 "name": "TLSTEST", 00:20:47.846 "trtype": "tcp", 00:20:47.846 "traddr": "10.0.0.2", 00:20:47.846 "adrfam": "ipv4", 00:20:47.846 "trsvcid": "4420", 00:20:47.846 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.847 "prchk_reftag": false, 00:20:47.847 "prchk_guard": false, 00:20:47.847 "hdgst": false, 00:20:47.847 "ddgst": false, 00:20:47.847 "psk": "key0", 00:20:47.847 "allow_unrecognized_csi": false, 00:20:47.847 "method": "bdev_nvme_attach_controller", 00:20:47.847 "req_id": 1 00:20:47.847 } 00:20:47.847 Got JSON-RPC error response 00:20:47.847 response: 00:20:47.847 { 00:20:47.847 "code": -5, 00:20:47.847 "message": "Input/output error" 00:20:47.847 } 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 650771 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 650771 ']' 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 650771 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650771 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650771' 00:20:47.847 killing process with pid 650771 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 650771 00:20:47.847 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.847 00:20:47.847 Latency(us) 00:20:47.847 [2024-11-25T11:55:27.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.847 [2024-11-25T11:55:27.750Z] =================================================================================================================== 00:20:47.847 [2024-11-25T11:55:27.750Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:47.847 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 650771 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:48.107 12:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=651088 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 651088 /var/tmp/bdevperf.sock 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 651088 ']' 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.107 12:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.107 [2024-11-25 12:55:27.844038] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:20:48.108 [2024-11-25 12:55:27.844092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651088 ] 00:20:48.108 [2024-11-25 12:55:27.909109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.108 [2024-11-25 12:55:27.936716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.368 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.368 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.368 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:48.368 [2024-11-25 12:55:28.169365] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:48.368 [2024-11-25 12:55:28.169388] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:48.368 request: 00:20:48.368 { 00:20:48.368 "name": "key0", 00:20:48.368 "path": "", 00:20:48.368 "method": "keyring_file_add_key", 00:20:48.368 "req_id": 1 00:20:48.368 } 00:20:48.368 Got JSON-RPC error response 00:20:48.368 response: 00:20:48.368 { 00:20:48.368 "code": -1, 00:20:48.368 "message": "Operation not permitted" 00:20:48.368 } 00:20:48.368 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:48.631 [2024-11-25 12:55:28.345890] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.631 [2024-11-25 12:55:28.345912] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:48.631 request: 00:20:48.631 { 00:20:48.631 "name": "TLSTEST", 00:20:48.631 "trtype": "tcp", 00:20:48.631 "traddr": "10.0.0.2", 00:20:48.631 "adrfam": "ipv4", 00:20:48.631 "trsvcid": "4420", 00:20:48.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.631 "prchk_reftag": false, 00:20:48.631 "prchk_guard": false, 00:20:48.631 "hdgst": false, 00:20:48.631 "ddgst": false, 00:20:48.631 "psk": "key0", 00:20:48.631 "allow_unrecognized_csi": false, 00:20:48.631 "method": "bdev_nvme_attach_controller", 00:20:48.631 "req_id": 1 00:20:48.631 } 00:20:48.631 Got JSON-RPC error response 00:20:48.631 response: 00:20:48.631 { 00:20:48.631 "code": -126, 00:20:48.631 "message": "Required key not available" 00:20:48.631 } 00:20:48.631 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 651088 00:20:48.631 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 651088 ']' 00:20:48.631 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 651088 00:20:48.631 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651088 
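This third case passes an empty string as the key path. keyring_file_add_key rejects anything that is not an absolute path, so key0 never enters the keyring, and the subsequent attach fails with -126 (Required key not available) instead of the earlier handshake error. The same two RPCs, sketched with the arguments from this run:

# An empty (non-absolute) path is rejected at registration time: -1, Operation not permitted
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''

# The attach then cannot resolve "key0" at all: -126, Required key not available
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0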
00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651088' 00:20:48.632 killing process with pid 651088 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 651088 00:20:48.632 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.632 00:20:48.632 Latency(us) 00:20:48.632 [2024-11-25T11:55:28.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.632 [2024-11-25T11:55:28.535Z] =================================================================================================================== 00:20:48.632 [2024-11-25T11:55:28.535Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 651088 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 645582 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 645582 ']' 00:20:48.632 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 645582 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 645582 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 645582' 00:20:48.894 killing process with pid 645582 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 645582 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 645582 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:48.894 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.8XLMICOfh0 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.8XLMICOfh0 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=651136 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 651136 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 651136 ']' 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.895 12:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.166 [2024-11-25 12:55:28.806672] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:20:49.166 [2024-11-25 12:55:28.806732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.166 [2024-11-25 12:55:28.903023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.166 [2024-11-25 12:55:28.931727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.166 [2024-11-25 12:55:28.931754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
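The format_interchange_psk call above wraps the raw hex string in the NVMe TLS PSK interchange format before it is written to key_long_path and locked down with chmod 0600. A standalone sketch of what the python heredoc computes, assuming the layout NVMeTLSkey-1:<digest as two hex digits>:base64(key bytes plus little-endian CRC-32):, where digest 2 selects SHA-384:

key='00112233445566778899aabbccddeeff0011223344556677'
digest=2
python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
# append CRC-32 of the key, little-endian, then base64 the whole payload
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
' "$key" "$digest"

Under those assumptions this prints the NVMeTLSkey-1:02:MDAx...wWXNJw==: value captured as key_long above.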
00:20:49.166 [2024-11-25 12:55:28.931759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.166 [2024-11-25 12:55:28.931764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.166 [2024-11-25 12:55:28.931769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.166 [2024-11-25 12:55:28.932258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.8XLMICOfh0 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8XLMICOfh0 00:20:49.742 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:50.021 [2024-11-25 12:55:29.791874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.021 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.282 12:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:50.282 [2024-11-25 12:55:30.116672] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.282 [2024-11-25 12:55:30.116877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.282 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:50.543 malloc0 00:20:50.543 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:50.804 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:20:50.804 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8XLMICOfh0 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8XLMICOfh0 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=651641 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 651641 /var/tmp/bdevperf.sock 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 651641 ']' 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.065 12:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.065 [2024-11-25 12:55:30.865365] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:20:51.065 [2024-11-25 12:55:30.865421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651641 ] 00:20:51.065 [2024-11-25 12:55:30.930169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.065 [2024-11-25 12:55:30.959233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.325 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.325 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.325 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:20:51.587 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.587 [2024-11-25 12:55:31.380597] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.587 TLSTESTn1 00:20:51.587 12:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:51.847 Running I/O for 10 seconds... 00:20:53.730 5756.00 IOPS, 22.48 MiB/s [2024-11-25T11:55:34.574Z] 5947.50 IOPS, 23.23 MiB/s [2024-11-25T11:55:35.958Z] 5598.33 IOPS, 21.87 MiB/s [2024-11-25T11:55:36.901Z] 5431.50 IOPS, 21.22 MiB/s [2024-11-25T11:55:37.845Z] 5618.00 IOPS, 21.95 MiB/s [2024-11-25T11:55:38.784Z] 5757.50 IOPS, 22.49 MiB/s [2024-11-25T11:55:39.724Z] 5655.86 IOPS, 22.09 MiB/s [2024-11-25T11:55:40.665Z] 5656.75 IOPS, 22.10 MiB/s [2024-11-25T11:55:41.609Z] 5709.22 IOPS, 22.30 MiB/s [2024-11-25T11:55:41.609Z] 5750.70 IOPS, 22.46 MiB/s 00:21:01.706 Latency(us) 00:21:01.706 [2024-11-25T11:55:41.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.706 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.706 Verification LBA range: start 0x0 length 0x2000 00:21:01.706 TLSTESTn1 : 10.02 5752.07 22.47 0.00 0.00 22218.20 4450.99 83012.27 00:21:01.706 [2024-11-25T11:55:41.609Z] =================================================================================================================== 00:21:01.706 [2024-11-25T11:55:41.609Z] Total : 5752.07 22.47 0.00 0.00 22218.20 4450.99 83012.27 00:21:01.968 { 00:21:01.968 "results": [ 00:21:01.968 { 00:21:01.968 "job": "TLSTESTn1", 00:21:01.968 "core_mask": "0x4", 00:21:01.968 "workload": "verify", 00:21:01.968 "status": "finished", 00:21:01.968 "verify_range": { 00:21:01.968 "start": 0, 00:21:01.968 "length": 8192 00:21:01.968 }, 00:21:01.968 "queue_depth": 128, 00:21:01.968 "io_size": 4096, 00:21:01.968 "runtime": 10.019705, 00:21:01.968 "iops": 5752.065554824218, 00:21:01.968 "mibps": 22.469006073532103, 00:21:01.968 "io_failed": 0, 00:21:01.968 "io_timeout": 0, 00:21:01.968 "avg_latency_us": 22218.20336977016, 00:21:01.968 "min_latency_us": 4450.986666666667, 00:21:01.968 "max_latency_us": 83012.26666666666 00:21:01.968 } 00:21:01.968 ], 00:21:01.968 
"core_count": 1 00:21:01.968 } 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 651641 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 651641 ']' 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 651641 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651641 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651641' 00:21:01.969 killing process with pid 651641 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 651641 00:21:01.969 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.969 00:21:01.969 Latency(us) 00:21:01.969 [2024-11-25T11:55:41.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.969 [2024-11-25T11:55:41.872Z] =================================================================================================================== 00:21:01.969 [2024-11-25T11:55:41.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 651641 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.8XLMICOfh0 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8XLMICOfh0 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8XLMICOfh0 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8XLMICOfh0 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.969 
12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8XLMICOfh0 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=653820 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 653820 /var/tmp/bdevperf.sock 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 653820 ']' 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.969 12:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.969 [2024-11-25 12:55:41.858098] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:21:01.969 [2024-11-25 12:55:41.858153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653820 ] 00:21:02.231 [2024-11-25 12:55:41.922780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.231 [2024-11-25 12:55:41.950816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.231 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.231 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.231 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:21:02.492 [2024-11-25 12:55:42.183503] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8XLMICOfh0': 0100666 00:21:02.492 [2024-11-25 12:55:42.183529] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:02.492 request: 00:21:02.492 { 00:21:02.492 "name": "key0", 00:21:02.492 "path": "/tmp/tmp.8XLMICOfh0", 00:21:02.492 "method": "keyring_file_add_key", 00:21:02.493 "req_id": 1 00:21:02.493 } 00:21:02.493 Got JSON-RPC error response 00:21:02.493 response: 00:21:02.493 { 00:21:02.493 "code": -1, 00:21:02.493 "message": "Operation not permitted" 00:21:02.493 } 00:21:02.493 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:02.493 [2024-11-25 12:55:42.360017] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.493 [2024-11-25 12:55:42.360039] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:02.493 request: 00:21:02.493 { 00:21:02.493 "name": "TLSTEST", 00:21:02.493 "trtype": "tcp", 00:21:02.493 "traddr": "10.0.0.2", 00:21:02.493 "adrfam": "ipv4", 00:21:02.493 "trsvcid": "4420", 00:21:02.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.493 "prchk_reftag": false, 00:21:02.493 "prchk_guard": false, 00:21:02.493 "hdgst": false, 00:21:02.493 "ddgst": false, 00:21:02.493 "psk": "key0", 00:21:02.493 "allow_unrecognized_csi": false, 00:21:02.493 "method": "bdev_nvme_attach_controller", 00:21:02.493 "req_id": 1 00:21:02.493 } 00:21:02.493 Got JSON-RPC error response 00:21:02.493 response: 00:21:02.493 { 00:21:02.493 "code": -126, 00:21:02.493 "message": "Required key not available" 00:21:02.493 } 00:21:02.493 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 653820 00:21:02.493 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 653820 ']' 00:21:02.493 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 653820 00:21:02.493 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653820 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653820' 00:21:02.753 killing process with pid 653820 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 653820 00:21:02.753 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.753 00:21:02.753 Latency(us) 00:21:02.753 [2024-11-25T11:55:42.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.753 [2024-11-25T11:55:42.656Z] =================================================================================================================== 00:21:02.753 [2024-11-25T11:55:42.656Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 653820 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 651136 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 651136 ']' 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 651136 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651136 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651136' 00:21:02.753 killing process with pid 651136 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 651136 00:21:02.753 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 651136 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=653862 
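The chmod 0666 at target/tls.sh@171 above is the point of this test: SPDK's file-based keyring refuses key files that are readable by group or other, so the exact key that worked at mode 0600 is now rejected at registration time ("Invalid permissions for key file ... 0100666"), and every consumer of key0 fails after it. The failing and passing sequences, condensed from this run:

chmod 0666 /tmp/tmp.8XLMICOfh0
rpc.py keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0   # rejected: mode 0666 is too permissive

chmod 0600 /tmp/tmp.8XLMICOfh0                         # owner read/write only, as tls.sh@182 restores later
rpc.py keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0   # accepted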
00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 653862 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 653862 ']' 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.067 12:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.067 [2024-11-25 12:55:42.785951] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:03.067 [2024-11-25 12:55:42.786006] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.067 [2024-11-25 12:55:42.881734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.067 [2024-11-25 12:55:42.912337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.067 [2024-11-25 12:55:42.912364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.067 [2024-11-25 12:55:42.912369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.067 [2024-11-25 12:55:42.912374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.067 [2024-11-25 12:55:42.912378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
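Each target (re)start in this test follows the same pattern: launch nvmf_tgt inside the test's network namespace with an instance id, trace mask, and core mask, then block until its RPC socket answers. Sketched from the invocation above, with waitforlisten being the autotest helper that polls the socket, and $! assumed as the way the pid is captured:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                 # 653862 in this run
waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs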
00:21:03.067 [2024-11-25 12:55:42.912854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.681 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.681 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:03.681 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.682 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.682 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.8XLMICOfh0 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.8XLMICOfh0 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.8XLMICOfh0 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8XLMICOfh0 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:03.943 [2024-11-25 12:55:43.757608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.943 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:04.205 12:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:04.205 [2024-11-25 12:55:44.078445] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.205 [2024-11-25 12:55:44.078647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.205 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:04.466 malloc0 00:21:04.466 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:04.728 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:21:04.728 [2024-11-25 
12:55:44.585579] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.8XLMICOfh0': 0100666 00:21:04.728 [2024-11-25 12:55:44.585601] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:04.728 request: 00:21:04.728 { 00:21:04.728 "name": "key0", 00:21:04.728 "path": "/tmp/tmp.8XLMICOfh0", 00:21:04.728 "method": "keyring_file_add_key", 00:21:04.728 "req_id": 1 00:21:04.728 } 00:21:04.728 Got JSON-RPC error response 00:21:04.728 response: 00:21:04.728 { 00:21:04.728 "code": -1, 00:21:04.728 "message": "Operation not permitted" 00:21:04.728 } 00:21:04.728 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:04.989 [2024-11-25 12:55:44.750031] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:04.989 [2024-11-25 12:55:44.750060] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:04.989 request: 00:21:04.989 { 00:21:04.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.989 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.989 "psk": "key0", 00:21:04.989 "method": "nvmf_subsystem_add_host", 00:21:04.989 "req_id": 1 00:21:04.989 } 00:21:04.989 Got JSON-RPC error response 00:21:04.989 response: 00:21:04.989 { 00:21:04.989 "code": -32603, 00:21:04.989 "message": "Internal error" 00:21:04.989 } 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 653862 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 653862 ']' 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 653862 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653862 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653862' 00:21:04.989 killing process with pid 653862 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 653862 00:21:04.989 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 653862 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.8XLMICOfh0 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=654455 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 654455 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 654455 ']' 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.268 12:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:05.268 [2024-11-25 12:55:44.997906] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:05.268 [2024-11-25 12:55:44.997962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.268 [2024-11-25 12:55:45.095039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.268 [2024-11-25 12:55:45.124227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.268 [2024-11-25 12:55:45.124256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.268 [2024-11-25 12:55:45.124262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.268 [2024-11-25 12:55:45.124267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.268 [2024-11-25 12:55:45.124271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
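For reference, the keyring failure and chmod 0600 recovery shown above can be reproduced directly against a running target. This is a minimal sketch, assuming an illustrative key path and PSK value (the interchange-format string below is made up; it is not the key from this run):

  # SPDK's file-based keyring refuses key files that group/other can read
  KEY=/tmp/tls_psk.txt                              # hypothetical path
  printf 'NVMeTLSkey-1:01:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA:\n' > "$KEY"
  chmod 0666 "$KEY"
  scripts/rpc.py keyring_file_add_key key0 "$KEY"   # rejected: Operation not permitted
  chmod 0600 "$KEY"
  scripts/rpc.py keyring_file_add_key key0 "$KEY"   # accepted once owner-only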
00:21:05.268 [2024-11-25 12:55:45.124738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.8XLMICOfh0 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8XLMICOfh0 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:06.213 [2024-11-25 12:55:45.960247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.213 12:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:06.474 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:06.474 [2024-11-25 12:55:46.289050] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.474 [2024-11-25 12:55:46.289256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.474 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:06.735 malloc0 00:21:06.735 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:06.997 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:21:06.997 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=654905 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 654905 /var/tmp/bdevperf.sock 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 654905 ']' 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.258 12:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.258 [2024-11-25 12:55:47.018362] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:07.258 [2024-11-25 12:55:47.018416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid654905 ] 00:21:07.258 [2024-11-25 12:55:47.081272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.258 [2024-11-25 12:55:47.110124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.583 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.583 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.583 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:21:07.583 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:07.843 [2024-11-25 12:55:47.523274] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.843 TLSTESTn1 00:21:07.843 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:08.104 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:08.104 "subsystems": [ 00:21:08.104 { 00:21:08.104 "subsystem": "keyring", 00:21:08.104 "config": [ 00:21:08.104 { 00:21:08.104 "method": "keyring_file_add_key", 00:21:08.104 "params": { 00:21:08.104 "name": "key0", 00:21:08.104 "path": "/tmp/tmp.8XLMICOfh0" 00:21:08.104 } 00:21:08.104 } 00:21:08.104 ] 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "subsystem": "iobuf", 00:21:08.104 "config": [ 00:21:08.104 { 00:21:08.104 "method": "iobuf_set_options", 00:21:08.104 "params": { 00:21:08.104 "small_pool_count": 8192, 00:21:08.104 "large_pool_count": 1024, 00:21:08.104 "small_bufsize": 8192, 00:21:08.104 "large_bufsize": 135168, 00:21:08.104 "enable_numa": false 00:21:08.104 } 00:21:08.104 } 00:21:08.104 ] 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "subsystem": "sock", 00:21:08.104 "config": [ 00:21:08.104 { 00:21:08.104 "method": "sock_set_default_impl", 00:21:08.104 "params": { 00:21:08.104 "impl_name": "posix" 00:21:08.104 } 
00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "method": "sock_impl_set_options", 00:21:08.104 "params": { 00:21:08.104 "impl_name": "ssl", 00:21:08.104 "recv_buf_size": 4096, 00:21:08.104 "send_buf_size": 4096, 00:21:08.104 "enable_recv_pipe": true, 00:21:08.104 "enable_quickack": false, 00:21:08.104 "enable_placement_id": 0, 00:21:08.104 "enable_zerocopy_send_server": true, 00:21:08.104 "enable_zerocopy_send_client": false, 00:21:08.104 "zerocopy_threshold": 0, 00:21:08.104 "tls_version": 0, 00:21:08.104 "enable_ktls": false 00:21:08.104 } 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "method": "sock_impl_set_options", 00:21:08.104 "params": { 00:21:08.104 "impl_name": "posix", 00:21:08.104 "recv_buf_size": 2097152, 00:21:08.104 "send_buf_size": 2097152, 00:21:08.104 "enable_recv_pipe": true, 00:21:08.104 "enable_quickack": false, 00:21:08.104 "enable_placement_id": 0, 00:21:08.104 "enable_zerocopy_send_server": true, 00:21:08.104 "enable_zerocopy_send_client": false, 00:21:08.104 "zerocopy_threshold": 0, 00:21:08.104 "tls_version": 0, 00:21:08.104 "enable_ktls": false 00:21:08.104 } 00:21:08.104 } 00:21:08.104 ] 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "subsystem": "vmd", 00:21:08.104 "config": [] 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "subsystem": "accel", 00:21:08.104 "config": [ 00:21:08.104 { 00:21:08.104 "method": "accel_set_options", 00:21:08.104 "params": { 00:21:08.104 "small_cache_size": 128, 00:21:08.104 "large_cache_size": 16, 00:21:08.104 "task_count": 2048, 00:21:08.104 "sequence_count": 2048, 00:21:08.104 "buf_count": 2048 00:21:08.104 } 00:21:08.104 } 00:21:08.104 ] 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "subsystem": "bdev", 00:21:08.104 "config": [ 00:21:08.104 { 00:21:08.104 "method": "bdev_set_options", 00:21:08.104 "params": { 00:21:08.104 "bdev_io_pool_size": 65535, 00:21:08.104 "bdev_io_cache_size": 256, 00:21:08.104 "bdev_auto_examine": true, 00:21:08.104 "iobuf_small_cache_size": 128, 00:21:08.104 "iobuf_large_cache_size": 16 00:21:08.104 } 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "method": "bdev_raid_set_options", 00:21:08.104 "params": { 00:21:08.104 "process_window_size_kb": 1024, 00:21:08.104 "process_max_bandwidth_mb_sec": 0 00:21:08.104 } 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "method": "bdev_iscsi_set_options", 00:21:08.104 "params": { 00:21:08.104 "timeout_sec": 30 00:21:08.104 } 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "method": "bdev_nvme_set_options", 00:21:08.104 "params": { 00:21:08.104 "action_on_timeout": "none", 00:21:08.104 "timeout_us": 0, 00:21:08.104 "timeout_admin_us": 0, 00:21:08.104 "keep_alive_timeout_ms": 10000, 00:21:08.104 "arbitration_burst": 0, 00:21:08.104 "low_priority_weight": 0, 00:21:08.104 "medium_priority_weight": 0, 00:21:08.104 "high_priority_weight": 0, 00:21:08.104 "nvme_adminq_poll_period_us": 10000, 00:21:08.104 "nvme_ioq_poll_period_us": 0, 00:21:08.104 "io_queue_requests": 0, 00:21:08.104 "delay_cmd_submit": true, 00:21:08.104 "transport_retry_count": 4, 00:21:08.104 "bdev_retry_count": 3, 00:21:08.104 "transport_ack_timeout": 0, 00:21:08.104 "ctrlr_loss_timeout_sec": 0, 00:21:08.104 "reconnect_delay_sec": 0, 00:21:08.104 "fast_io_fail_timeout_sec": 0, 00:21:08.104 "disable_auto_failback": false, 00:21:08.104 "generate_uuids": false, 00:21:08.104 "transport_tos": 0, 00:21:08.104 "nvme_error_stat": false, 00:21:08.104 "rdma_srq_size": 0, 00:21:08.104 "io_path_stat": false, 00:21:08.104 "allow_accel_sequence": false, 00:21:08.104 "rdma_max_cq_size": 0, 00:21:08.104 "rdma_cm_event_timeout_ms": 0, 
00:21:08.104 "dhchap_digests": [ 00:21:08.104 "sha256", 00:21:08.104 "sha384", 00:21:08.104 "sha512" 00:21:08.104 ], 00:21:08.104 "dhchap_dhgroups": [ 00:21:08.104 "null", 00:21:08.104 "ffdhe2048", 00:21:08.104 "ffdhe3072", 00:21:08.104 "ffdhe4096", 00:21:08.104 "ffdhe6144", 00:21:08.104 "ffdhe8192" 00:21:08.104 ] 00:21:08.104 } 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "method": "bdev_nvme_set_hotplug", 00:21:08.104 "params": { 00:21:08.104 "period_us": 100000, 00:21:08.104 "enable": false 00:21:08.104 } 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "method": "bdev_malloc_create", 00:21:08.104 "params": { 00:21:08.104 "name": "malloc0", 00:21:08.104 "num_blocks": 8192, 00:21:08.104 "block_size": 4096, 00:21:08.104 "physical_block_size": 4096, 00:21:08.104 "uuid": "8510395c-e9ef-4c4f-a319-70a2558bf1be", 00:21:08.104 "optimal_io_boundary": 0, 00:21:08.104 "md_size": 0, 00:21:08.104 "dif_type": 0, 00:21:08.104 "dif_is_head_of_md": false, 00:21:08.104 "dif_pi_format": 0 00:21:08.104 } 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "method": "bdev_wait_for_examine" 00:21:08.104 } 00:21:08.104 ] 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "subsystem": "nbd", 00:21:08.104 "config": [] 00:21:08.104 }, 00:21:08.104 { 00:21:08.104 "subsystem": "scheduler", 00:21:08.105 "config": [ 00:21:08.105 { 00:21:08.105 "method": "framework_set_scheduler", 00:21:08.105 "params": { 00:21:08.105 "name": "static" 00:21:08.105 } 00:21:08.105 } 00:21:08.105 ] 00:21:08.105 }, 00:21:08.105 { 00:21:08.105 "subsystem": "nvmf", 00:21:08.105 "config": [ 00:21:08.105 { 00:21:08.105 "method": "nvmf_set_config", 00:21:08.105 "params": { 00:21:08.105 "discovery_filter": "match_any", 00:21:08.105 "admin_cmd_passthru": { 00:21:08.105 "identify_ctrlr": false 00:21:08.105 }, 00:21:08.105 "dhchap_digests": [ 00:21:08.105 "sha256", 00:21:08.105 "sha384", 00:21:08.105 "sha512" 00:21:08.105 ], 00:21:08.105 "dhchap_dhgroups": [ 00:21:08.105 "null", 00:21:08.105 "ffdhe2048", 00:21:08.105 "ffdhe3072", 00:21:08.105 "ffdhe4096", 00:21:08.105 "ffdhe6144", 00:21:08.105 "ffdhe8192" 00:21:08.105 ] 00:21:08.105 } 00:21:08.105 }, 00:21:08.105 { 00:21:08.105 "method": "nvmf_set_max_subsystems", 00:21:08.105 "params": { 00:21:08.105 "max_subsystems": 1024 00:21:08.105 } 00:21:08.105 }, 00:21:08.105 { 00:21:08.105 "method": "nvmf_set_crdt", 00:21:08.105 "params": { 00:21:08.105 "crdt1": 0, 00:21:08.105 "crdt2": 0, 00:21:08.105 "crdt3": 0 00:21:08.105 } 00:21:08.105 }, 00:21:08.105 { 00:21:08.105 "method": "nvmf_create_transport", 00:21:08.105 "params": { 00:21:08.105 "trtype": "TCP", 00:21:08.105 "max_queue_depth": 128, 00:21:08.105 "max_io_qpairs_per_ctrlr": 127, 00:21:08.105 "in_capsule_data_size": 4096, 00:21:08.105 "max_io_size": 131072, 00:21:08.105 "io_unit_size": 131072, 00:21:08.105 "max_aq_depth": 128, 00:21:08.105 "num_shared_buffers": 511, 00:21:08.105 "buf_cache_size": 4294967295, 00:21:08.105 "dif_insert_or_strip": false, 00:21:08.105 "zcopy": false, 00:21:08.105 "c2h_success": false, 00:21:08.105 "sock_priority": 0, 00:21:08.105 "abort_timeout_sec": 1, 00:21:08.105 "ack_timeout": 0, 00:21:08.105 "data_wr_pool_size": 0 00:21:08.105 } 00:21:08.105 }, 00:21:08.105 { 00:21:08.105 "method": "nvmf_create_subsystem", 00:21:08.105 "params": { 00:21:08.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.105 "allow_any_host": false, 00:21:08.105 "serial_number": "SPDK00000000000001", 00:21:08.105 "model_number": "SPDK bdev Controller", 00:21:08.105 "max_namespaces": 10, 00:21:08.105 "min_cntlid": 1, 00:21:08.105 "max_cntlid": 65519, 00:21:08.105 
"ana_reporting": false 00:21:08.105 } 00:21:08.105 }, 00:21:08.105 { 00:21:08.105 "method": "nvmf_subsystem_add_host", 00:21:08.105 "params": { 00:21:08.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.105 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.105 "psk": "key0" 00:21:08.105 } 00:21:08.105 }, 00:21:08.105 { 00:21:08.105 "method": "nvmf_subsystem_add_ns", 00:21:08.105 "params": { 00:21:08.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.105 "namespace": { 00:21:08.105 "nsid": 1, 00:21:08.105 "bdev_name": "malloc0", 00:21:08.105 "nguid": "8510395CE9EF4C4FA31970A2558BF1BE", 00:21:08.105 "uuid": "8510395c-e9ef-4c4f-a319-70a2558bf1be", 00:21:08.105 "no_auto_visible": false 00:21:08.105 } 00:21:08.105 } 00:21:08.105 }, 00:21:08.105 { 00:21:08.105 "method": "nvmf_subsystem_add_listener", 00:21:08.105 "params": { 00:21:08.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.105 "listen_address": { 00:21:08.105 "trtype": "TCP", 00:21:08.105 "adrfam": "IPv4", 00:21:08.105 "traddr": "10.0.0.2", 00:21:08.105 "trsvcid": "4420" 00:21:08.105 }, 00:21:08.105 "secure_channel": true 00:21:08.105 } 00:21:08.105 } 00:21:08.105 ] 00:21:08.105 } 00:21:08.105 ] 00:21:08.105 }' 00:21:08.105 12:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:08.366 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:08.366 "subsystems": [ 00:21:08.366 { 00:21:08.366 "subsystem": "keyring", 00:21:08.366 "config": [ 00:21:08.366 { 00:21:08.366 "method": "keyring_file_add_key", 00:21:08.366 "params": { 00:21:08.366 "name": "key0", 00:21:08.366 "path": "/tmp/tmp.8XLMICOfh0" 00:21:08.366 } 00:21:08.366 } 00:21:08.366 ] 00:21:08.366 }, 00:21:08.366 { 00:21:08.366 "subsystem": "iobuf", 00:21:08.366 "config": [ 00:21:08.366 { 00:21:08.366 "method": "iobuf_set_options", 00:21:08.366 "params": { 00:21:08.366 "small_pool_count": 8192, 00:21:08.366 "large_pool_count": 1024, 00:21:08.366 "small_bufsize": 8192, 00:21:08.366 "large_bufsize": 135168, 00:21:08.366 "enable_numa": false 00:21:08.366 } 00:21:08.366 } 00:21:08.366 ] 00:21:08.366 }, 00:21:08.366 { 00:21:08.366 "subsystem": "sock", 00:21:08.366 "config": [ 00:21:08.366 { 00:21:08.366 "method": "sock_set_default_impl", 00:21:08.366 "params": { 00:21:08.366 "impl_name": "posix" 00:21:08.366 } 00:21:08.366 }, 00:21:08.366 { 00:21:08.366 "method": "sock_impl_set_options", 00:21:08.366 "params": { 00:21:08.366 "impl_name": "ssl", 00:21:08.366 "recv_buf_size": 4096, 00:21:08.366 "send_buf_size": 4096, 00:21:08.367 "enable_recv_pipe": true, 00:21:08.367 "enable_quickack": false, 00:21:08.367 "enable_placement_id": 0, 00:21:08.367 "enable_zerocopy_send_server": true, 00:21:08.367 "enable_zerocopy_send_client": false, 00:21:08.367 "zerocopy_threshold": 0, 00:21:08.367 "tls_version": 0, 00:21:08.367 "enable_ktls": false 00:21:08.367 } 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "method": "sock_impl_set_options", 00:21:08.367 "params": { 00:21:08.367 "impl_name": "posix", 00:21:08.367 "recv_buf_size": 2097152, 00:21:08.367 "send_buf_size": 2097152, 00:21:08.367 "enable_recv_pipe": true, 00:21:08.367 "enable_quickack": false, 00:21:08.367 "enable_placement_id": 0, 00:21:08.367 "enable_zerocopy_send_server": true, 00:21:08.367 "enable_zerocopy_send_client": false, 00:21:08.367 "zerocopy_threshold": 0, 00:21:08.367 "tls_version": 0, 00:21:08.367 "enable_ktls": false 00:21:08.367 } 00:21:08.367 } 00:21:08.367 ] 00:21:08.367 }, 
00:21:08.367 { 00:21:08.367 "subsystem": "vmd", 00:21:08.367 "config": [] 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "subsystem": "accel", 00:21:08.367 "config": [ 00:21:08.367 { 00:21:08.367 "method": "accel_set_options", 00:21:08.367 "params": { 00:21:08.367 "small_cache_size": 128, 00:21:08.367 "large_cache_size": 16, 00:21:08.367 "task_count": 2048, 00:21:08.367 "sequence_count": 2048, 00:21:08.367 "buf_count": 2048 00:21:08.367 } 00:21:08.367 } 00:21:08.367 ] 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "subsystem": "bdev", 00:21:08.367 "config": [ 00:21:08.367 { 00:21:08.367 "method": "bdev_set_options", 00:21:08.367 "params": { 00:21:08.367 "bdev_io_pool_size": 65535, 00:21:08.367 "bdev_io_cache_size": 256, 00:21:08.367 "bdev_auto_examine": true, 00:21:08.367 "iobuf_small_cache_size": 128, 00:21:08.367 "iobuf_large_cache_size": 16 00:21:08.367 } 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "method": "bdev_raid_set_options", 00:21:08.367 "params": { 00:21:08.367 "process_window_size_kb": 1024, 00:21:08.367 "process_max_bandwidth_mb_sec": 0 00:21:08.367 } 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "method": "bdev_iscsi_set_options", 00:21:08.367 "params": { 00:21:08.367 "timeout_sec": 30 00:21:08.367 } 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "method": "bdev_nvme_set_options", 00:21:08.367 "params": { 00:21:08.367 "action_on_timeout": "none", 00:21:08.367 "timeout_us": 0, 00:21:08.367 "timeout_admin_us": 0, 00:21:08.367 "keep_alive_timeout_ms": 10000, 00:21:08.367 "arbitration_burst": 0, 00:21:08.367 "low_priority_weight": 0, 00:21:08.367 "medium_priority_weight": 0, 00:21:08.367 "high_priority_weight": 0, 00:21:08.367 "nvme_adminq_poll_period_us": 10000, 00:21:08.367 "nvme_ioq_poll_period_us": 0, 00:21:08.367 "io_queue_requests": 512, 00:21:08.367 "delay_cmd_submit": true, 00:21:08.367 "transport_retry_count": 4, 00:21:08.367 "bdev_retry_count": 3, 00:21:08.367 "transport_ack_timeout": 0, 00:21:08.367 "ctrlr_loss_timeout_sec": 0, 00:21:08.367 "reconnect_delay_sec": 0, 00:21:08.367 "fast_io_fail_timeout_sec": 0, 00:21:08.367 "disable_auto_failback": false, 00:21:08.367 "generate_uuids": false, 00:21:08.367 "transport_tos": 0, 00:21:08.367 "nvme_error_stat": false, 00:21:08.367 "rdma_srq_size": 0, 00:21:08.367 "io_path_stat": false, 00:21:08.367 "allow_accel_sequence": false, 00:21:08.367 "rdma_max_cq_size": 0, 00:21:08.367 "rdma_cm_event_timeout_ms": 0, 00:21:08.367 "dhchap_digests": [ 00:21:08.367 "sha256", 00:21:08.367 "sha384", 00:21:08.367 "sha512" 00:21:08.367 ], 00:21:08.367 "dhchap_dhgroups": [ 00:21:08.367 "null", 00:21:08.367 "ffdhe2048", 00:21:08.367 "ffdhe3072", 00:21:08.367 "ffdhe4096", 00:21:08.367 "ffdhe6144", 00:21:08.367 "ffdhe8192" 00:21:08.367 ] 00:21:08.367 } 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "method": "bdev_nvme_attach_controller", 00:21:08.367 "params": { 00:21:08.367 "name": "TLSTEST", 00:21:08.367 "trtype": "TCP", 00:21:08.367 "adrfam": "IPv4", 00:21:08.367 "traddr": "10.0.0.2", 00:21:08.367 "trsvcid": "4420", 00:21:08.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.367 "prchk_reftag": false, 00:21:08.367 "prchk_guard": false, 00:21:08.367 "ctrlr_loss_timeout_sec": 0, 00:21:08.367 "reconnect_delay_sec": 0, 00:21:08.367 "fast_io_fail_timeout_sec": 0, 00:21:08.367 "psk": "key0", 00:21:08.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.367 "hdgst": false, 00:21:08.367 "ddgst": false, 00:21:08.367 "multipath": "multipath" 00:21:08.367 } 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "method": "bdev_nvme_set_hotplug", 00:21:08.367 "params": { 
00:21:08.367 "period_us": 100000, 00:21:08.367 "enable": false 00:21:08.367 } 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "method": "bdev_wait_for_examine" 00:21:08.367 } 00:21:08.367 ] 00:21:08.367 }, 00:21:08.367 { 00:21:08.367 "subsystem": "nbd", 00:21:08.367 "config": [] 00:21:08.367 } 00:21:08.367 ] 00:21:08.367 }' 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 654905 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 654905 ']' 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 654905 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 654905 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 654905' 00:21:08.367 killing process with pid 654905 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 654905 00:21:08.367 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.367 00:21:08.367 Latency(us) 00:21:08.367 [2024-11-25T11:55:48.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.367 [2024-11-25T11:55:48.270Z] =================================================================================================================== 00:21:08.367 [2024-11-25T11:55:48.270Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:08.367 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 654905 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 654455 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 654455 ']' 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 654455 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 654455 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 654455' 00:21:08.628 killing process with pid 654455 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 654455 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 654455 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 
00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.628 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:08.628 "subsystems": [ 00:21:08.628 { 00:21:08.628 "subsystem": "keyring", 00:21:08.628 "config": [ 00:21:08.628 { 00:21:08.628 "method": "keyring_file_add_key", 00:21:08.628 "params": { 00:21:08.628 "name": "key0", 00:21:08.628 "path": "/tmp/tmp.8XLMICOfh0" 00:21:08.628 } 00:21:08.628 } 00:21:08.628 ] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "iobuf", 00:21:08.628 "config": [ 00:21:08.628 { 00:21:08.628 "method": "iobuf_set_options", 00:21:08.628 "params": { 00:21:08.628 "small_pool_count": 8192, 00:21:08.628 "large_pool_count": 1024, 00:21:08.628 "small_bufsize": 8192, 00:21:08.628 "large_bufsize": 135168, 00:21:08.628 "enable_numa": false 00:21:08.628 } 00:21:08.628 } 00:21:08.628 ] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "sock", 00:21:08.628 "config": [ 00:21:08.628 { 00:21:08.628 "method": "sock_set_default_impl", 00:21:08.628 "params": { 00:21:08.628 "impl_name": "posix" 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "sock_impl_set_options", 00:21:08.628 "params": { 00:21:08.628 "impl_name": "ssl", 00:21:08.628 "recv_buf_size": 4096, 00:21:08.628 "send_buf_size": 4096, 00:21:08.628 "enable_recv_pipe": true, 00:21:08.628 "enable_quickack": false, 00:21:08.628 "enable_placement_id": 0, 00:21:08.628 "enable_zerocopy_send_server": true, 00:21:08.628 "enable_zerocopy_send_client": false, 00:21:08.628 "zerocopy_threshold": 0, 00:21:08.628 "tls_version": 0, 00:21:08.628 "enable_ktls": false 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "sock_impl_set_options", 00:21:08.628 "params": { 00:21:08.628 "impl_name": "posix", 00:21:08.628 "recv_buf_size": 2097152, 00:21:08.628 "send_buf_size": 2097152, 00:21:08.628 "enable_recv_pipe": true, 00:21:08.628 "enable_quickack": false, 00:21:08.628 "enable_placement_id": 0, 00:21:08.628 "enable_zerocopy_send_server": true, 00:21:08.628 "enable_zerocopy_send_client": false, 00:21:08.628 "zerocopy_threshold": 0, 00:21:08.628 "tls_version": 0, 00:21:08.628 "enable_ktls": false 00:21:08.628 } 00:21:08.628 } 00:21:08.628 ] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "vmd", 00:21:08.628 "config": [] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "accel", 00:21:08.628 "config": [ 00:21:08.628 { 00:21:08.628 "method": "accel_set_options", 00:21:08.628 "params": { 00:21:08.628 "small_cache_size": 128, 00:21:08.628 "large_cache_size": 16, 00:21:08.628 "task_count": 2048, 00:21:08.628 "sequence_count": 2048, 00:21:08.628 "buf_count": 2048 00:21:08.628 } 00:21:08.628 } 00:21:08.628 ] 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "subsystem": "bdev", 00:21:08.628 "config": [ 00:21:08.628 { 00:21:08.628 "method": "bdev_set_options", 00:21:08.628 "params": { 00:21:08.628 "bdev_io_pool_size": 65535, 00:21:08.628 "bdev_io_cache_size": 256, 00:21:08.628 "bdev_auto_examine": true, 00:21:08.628 "iobuf_small_cache_size": 128, 00:21:08.628 "iobuf_large_cache_size": 16 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_raid_set_options", 00:21:08.628 "params": { 00:21:08.628 "process_window_size_kb": 1024, 00:21:08.628 
"process_max_bandwidth_mb_sec": 0 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_iscsi_set_options", 00:21:08.628 "params": { 00:21:08.628 "timeout_sec": 30 00:21:08.628 } 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "method": "bdev_nvme_set_options", 00:21:08.628 "params": { 00:21:08.628 "action_on_timeout": "none", 00:21:08.628 "timeout_us": 0, 00:21:08.628 "timeout_admin_us": 0, 00:21:08.628 "keep_alive_timeout_ms": 10000, 00:21:08.629 "arbitration_burst": 0, 00:21:08.629 "low_priority_weight": 0, 00:21:08.629 "medium_priority_weight": 0, 00:21:08.629 "high_priority_weight": 0, 00:21:08.629 "nvme_adminq_poll_period_us": 10000, 00:21:08.629 "nvme_ioq_poll_period_us": 0, 00:21:08.629 "io_queue_requests": 0, 00:21:08.629 "delay_cmd_submit": true, 00:21:08.629 "transport_retry_count": 4, 00:21:08.629 "bdev_retry_count": 3, 00:21:08.629 "transport_ack_timeout": 0, 00:21:08.629 "ctrlr_loss_timeout_sec": 0, 00:21:08.629 "reconnect_delay_sec": 0, 00:21:08.629 "fast_io_fail_timeout_sec": 0, 00:21:08.629 "disable_auto_failback": false, 00:21:08.629 "generate_uuids": false, 00:21:08.629 "transport_tos": 0, 00:21:08.629 "nvme_error_stat": false, 00:21:08.629 "rdma_srq_size": 0, 00:21:08.629 "io_path_stat": false, 00:21:08.629 "allow_accel_sequence": false, 00:21:08.629 "rdma_max_cq_size": 0, 00:21:08.629 "rdma_cm_event_timeout_ms": 0, 00:21:08.629 "dhchap_digests": [ 00:21:08.629 "sha256", 00:21:08.629 "sha384", 00:21:08.629 "sha512" 00:21:08.629 ], 00:21:08.629 "dhchap_dhgroups": [ 00:21:08.629 "null", 00:21:08.629 "ffdhe2048", 00:21:08.629 "ffdhe3072", 00:21:08.629 "ffdhe4096", 00:21:08.629 "ffdhe6144", 00:21:08.629 "ffdhe8192" 00:21:08.629 ] 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "bdev_nvme_set_hotplug", 00:21:08.629 "params": { 00:21:08.629 "period_us": 100000, 00:21:08.629 "enable": false 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "bdev_malloc_create", 00:21:08.629 "params": { 00:21:08.629 "name": "malloc0", 00:21:08.629 "num_blocks": 8192, 00:21:08.629 "block_size": 4096, 00:21:08.629 "physical_block_size": 4096, 00:21:08.629 "uuid": "8510395c-e9ef-4c4f-a319-70a2558bf1be", 00:21:08.629 "optimal_io_boundary": 0, 00:21:08.629 "md_size": 0, 00:21:08.629 "dif_type": 0, 00:21:08.629 "dif_is_head_of_md": false, 00:21:08.629 "dif_pi_format": 0 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "bdev_wait_for_examine" 00:21:08.629 } 00:21:08.629 ] 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "subsystem": "nbd", 00:21:08.629 "config": [] 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "subsystem": "scheduler", 00:21:08.629 "config": [ 00:21:08.629 { 00:21:08.629 "method": "framework_set_scheduler", 00:21:08.629 "params": { 00:21:08.629 "name": "static" 00:21:08.629 } 00:21:08.629 } 00:21:08.629 ] 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "subsystem": "nvmf", 00:21:08.629 "config": [ 00:21:08.629 { 00:21:08.629 "method": "nvmf_set_config", 00:21:08.629 "params": { 00:21:08.629 "discovery_filter": "match_any", 00:21:08.629 "admin_cmd_passthru": { 00:21:08.629 "identify_ctrlr": false 00:21:08.629 }, 00:21:08.629 "dhchap_digests": [ 00:21:08.629 "sha256", 00:21:08.629 "sha384", 00:21:08.629 "sha512" 00:21:08.629 ], 00:21:08.629 "dhchap_dhgroups": [ 00:21:08.629 "null", 00:21:08.629 "ffdhe2048", 00:21:08.629 "ffdhe3072", 00:21:08.629 "ffdhe4096", 00:21:08.629 "ffdhe6144", 00:21:08.629 "ffdhe8192" 00:21:08.629 ] 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "nvmf_set_max_subsystems", 
00:21:08.629 "params": { 00:21:08.629 "max_subsystems": 1024 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "nvmf_set_crdt", 00:21:08.629 "params": { 00:21:08.629 "crdt1": 0, 00:21:08.629 "crdt2": 0, 00:21:08.629 "crdt3": 0 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "nvmf_create_transport", 00:21:08.629 "params": { 00:21:08.629 "trtype": "TCP", 00:21:08.629 "max_queue_depth": 128, 00:21:08.629 "max_io_qpairs_per_ctrlr": 127, 00:21:08.629 "in_capsule_data_size": 4096, 00:21:08.629 "max_io_size": 131072, 00:21:08.629 "io_unit_size": 131072, 00:21:08.629 "max_aq_depth": 128, 00:21:08.629 "num_shared_buffers": 511, 00:21:08.629 "buf_cache_size": 4294967295, 00:21:08.629 "dif_insert_or_strip": false, 00:21:08.629 "zcopy": false, 00:21:08.629 "c2h_success": false, 00:21:08.629 "sock_priority": 0, 00:21:08.629 "abort_timeout_sec": 1, 00:21:08.629 "ack_timeout": 0, 00:21:08.629 "data_wr_pool_size": 0 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "nvmf_create_subsystem", 00:21:08.629 "params": { 00:21:08.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.629 "allow_any_host": false, 00:21:08.629 "serial_number": "SPDK00000000000001", 00:21:08.629 "model_number": "SPDK bdev Controller", 00:21:08.629 "max_namespaces": 10, 00:21:08.629 "min_cntlid": 1, 00:21:08.629 "max_cntlid": 65519, 00:21:08.629 "ana_reporting": false 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "nvmf_subsystem_add_host", 00:21:08.629 "params": { 00:21:08.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.629 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.629 "psk": "key0" 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "nvmf_subsystem_add_ns", 00:21:08.629 "params": { 00:21:08.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.629 "namespace": { 00:21:08.629 "nsid": 1, 00:21:08.629 "bdev_name": "malloc0", 00:21:08.629 "nguid": "8510395CE9EF4C4FA31970A2558BF1BE", 00:21:08.629 "uuid": "8510395c-e9ef-4c4f-a319-70a2558bf1be", 00:21:08.629 "no_auto_visible": false 00:21:08.629 } 00:21:08.629 } 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "method": "nvmf_subsystem_add_listener", 00:21:08.629 "params": { 00:21:08.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.629 "listen_address": { 00:21:08.629 "trtype": "TCP", 00:21:08.629 "adrfam": "IPv4", 00:21:08.629 "traddr": "10.0.0.2", 00:21:08.629 "trsvcid": "4420" 00:21:08.629 }, 00:21:08.629 "secure_channel": true 00:21:08.629 } 00:21:08.629 } 00:21:08.629 ] 00:21:08.629 } 00:21:08.629 ] 00:21:08.629 }' 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=655184 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 655184 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 655184 ']' 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:08.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.629 12:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.629 [2024-11-25 12:55:48.519523] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:08.629 [2024-11-25 12:55:48.519580] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.890 [2024-11-25 12:55:48.613145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.890 [2024-11-25 12:55:48.641645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.890 [2024-11-25 12:55:48.641672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.890 [2024-11-25 12:55:48.641678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.890 [2024-11-25 12:55:48.641682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.890 [2024-11-25 12:55:48.641687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.890 [2024-11-25 12:55:48.642185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.151 [2024-11-25 12:55:48.834996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.151 [2024-11-25 12:55:48.867022] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.151 [2024-11-25 12:55:48.867219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.412 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.412 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:09.412 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.412 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.412 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=655286 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 655286 /var/tmp/bdevperf.sock 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 655286 ']' 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
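The target restart above (target/tls.sh@205) hands the JSON captured earlier with save_config back to a fresh nvmf_tgt as /dev/fd/62. A sketch that reproduces the effect with bash process substitution, under the assumption that the harness's nvmfappstart wrapper does essentially this (netns and trace flags trimmed):

  # snapshot the live target configuration as JSON (the log stores it in $tgtconf)
  tgtconf=$(scripts/rpc.py save_config)
  # boot a new target from it without writing a config file to disk;
  # bash exposes the substituted stream as /dev/fd/62 or /dev/fd/63
  build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")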
00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.673 12:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:09.673 "subsystems": [ 00:21:09.673 { 00:21:09.673 "subsystem": "keyring", 00:21:09.673 "config": [ 00:21:09.673 { 00:21:09.673 "method": "keyring_file_add_key", 00:21:09.673 "params": { 00:21:09.673 "name": "key0", 00:21:09.673 "path": "/tmp/tmp.8XLMICOfh0" 00:21:09.673 } 00:21:09.673 } 00:21:09.673 ] 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "subsystem": "iobuf", 00:21:09.673 "config": [ 00:21:09.673 { 00:21:09.673 "method": "iobuf_set_options", 00:21:09.673 "params": { 00:21:09.673 "small_pool_count": 8192, 00:21:09.673 "large_pool_count": 1024, 00:21:09.673 "small_bufsize": 8192, 00:21:09.673 "large_bufsize": 135168, 00:21:09.673 "enable_numa": false 00:21:09.673 } 00:21:09.673 } 00:21:09.673 ] 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "subsystem": "sock", 00:21:09.673 "config": [ 00:21:09.673 { 00:21:09.673 "method": "sock_set_default_impl", 00:21:09.673 "params": { 00:21:09.673 "impl_name": "posix" 00:21:09.673 } 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "method": "sock_impl_set_options", 00:21:09.673 "params": { 00:21:09.673 "impl_name": "ssl", 00:21:09.673 "recv_buf_size": 4096, 00:21:09.673 "send_buf_size": 4096, 00:21:09.673 "enable_recv_pipe": true, 00:21:09.673 "enable_quickack": false, 00:21:09.673 "enable_placement_id": 0, 00:21:09.673 "enable_zerocopy_send_server": true, 00:21:09.673 "enable_zerocopy_send_client": false, 00:21:09.673 "zerocopy_threshold": 0, 00:21:09.673 "tls_version": 0, 00:21:09.673 "enable_ktls": false 00:21:09.673 } 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "method": "sock_impl_set_options", 00:21:09.673 "params": { 00:21:09.673 "impl_name": "posix", 00:21:09.673 "recv_buf_size": 2097152, 00:21:09.673 "send_buf_size": 2097152, 00:21:09.673 "enable_recv_pipe": true, 00:21:09.673 "enable_quickack": false, 00:21:09.673 "enable_placement_id": 0, 00:21:09.673 "enable_zerocopy_send_server": true, 00:21:09.673 "enable_zerocopy_send_client": false, 00:21:09.673 "zerocopy_threshold": 0, 00:21:09.673 "tls_version": 0, 00:21:09.673 "enable_ktls": false 00:21:09.673 } 00:21:09.673 } 00:21:09.673 ] 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "subsystem": "vmd", 00:21:09.673 "config": [] 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "subsystem": "accel", 00:21:09.673 "config": [ 00:21:09.673 { 00:21:09.673 "method": "accel_set_options", 00:21:09.673 "params": { 00:21:09.673 "small_cache_size": 128, 00:21:09.673 "large_cache_size": 16, 00:21:09.673 "task_count": 2048, 00:21:09.673 "sequence_count": 2048, 00:21:09.673 "buf_count": 2048 00:21:09.673 } 00:21:09.673 } 00:21:09.673 ] 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "subsystem": "bdev", 00:21:09.673 "config": [ 00:21:09.673 { 00:21:09.673 "method": "bdev_set_options", 00:21:09.673 "params": { 00:21:09.673 "bdev_io_pool_size": 65535, 00:21:09.673 "bdev_io_cache_size": 256, 00:21:09.673 "bdev_auto_examine": true, 00:21:09.673 "iobuf_small_cache_size": 128, 00:21:09.673 "iobuf_large_cache_size": 16 00:21:09.673 } 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "method": 
"bdev_raid_set_options", 00:21:09.673 "params": { 00:21:09.673 "process_window_size_kb": 1024, 00:21:09.673 "process_max_bandwidth_mb_sec": 0 00:21:09.673 } 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "method": "bdev_iscsi_set_options", 00:21:09.673 "params": { 00:21:09.673 "timeout_sec": 30 00:21:09.673 } 00:21:09.673 }, 00:21:09.673 { 00:21:09.673 "method": "bdev_nvme_set_options", 00:21:09.673 "params": { 00:21:09.673 "action_on_timeout": "none", 00:21:09.673 "timeout_us": 0, 00:21:09.673 "timeout_admin_us": 0, 00:21:09.673 "keep_alive_timeout_ms": 10000, 00:21:09.673 "arbitration_burst": 0, 00:21:09.673 "low_priority_weight": 0, 00:21:09.673 "medium_priority_weight": 0, 00:21:09.673 "high_priority_weight": 0, 00:21:09.673 "nvme_adminq_poll_period_us": 10000, 00:21:09.673 "nvme_ioq_poll_period_us": 0, 00:21:09.673 "io_queue_requests": 512, 00:21:09.673 "delay_cmd_submit": true, 00:21:09.673 "transport_retry_count": 4, 00:21:09.673 "bdev_retry_count": 3, 00:21:09.673 "transport_ack_timeout": 0, 00:21:09.673 "ctrlr_loss_timeout_sec": 0, 00:21:09.673 "reconnect_delay_sec": 0, 00:21:09.673 "fast_io_fail_timeout_sec": 0, 00:21:09.673 "disable_auto_failback": false, 00:21:09.673 "generate_uuids": false, 00:21:09.673 "transport_tos": 0, 00:21:09.673 "nvme_error_stat": false, 00:21:09.673 "rdma_srq_size": 0, 00:21:09.673 "io_path_stat": false, 00:21:09.673 "allow_accel_sequence": false, 00:21:09.673 "rdma_max_cq_size": 0, 00:21:09.673 "rdma_cm_event_timeout_ms": 0, 00:21:09.673 "dhchap_digests": [ 00:21:09.673 "sha256", 00:21:09.673 "sha384", 00:21:09.673 "sha512" 00:21:09.673 ], 00:21:09.673 "dhchap_dhgroups": [ 00:21:09.673 "null", 00:21:09.674 "ffdhe2048", 00:21:09.674 "ffdhe3072", 00:21:09.674 "ffdhe4096", 00:21:09.674 "ffdhe6144", 00:21:09.674 "ffdhe8192" 00:21:09.674 ] 00:21:09.674 } 00:21:09.674 }, 00:21:09.674 { 00:21:09.674 "method": "bdev_nvme_attach_controller", 00:21:09.674 "params": { 00:21:09.674 "name": "TLSTEST", 00:21:09.674 "trtype": "TCP", 00:21:09.674 "adrfam": "IPv4", 00:21:09.674 "traddr": "10.0.0.2", 00:21:09.674 "trsvcid": "4420", 00:21:09.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.674 "prchk_reftag": false, 00:21:09.674 "prchk_guard": false, 00:21:09.674 "ctrlr_loss_timeout_sec": 0, 00:21:09.674 "reconnect_delay_sec": 0, 00:21:09.674 "fast_io_fail_timeout_sec": 0, 00:21:09.674 "psk": "key0", 00:21:09.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.674 "hdgst": false, 00:21:09.674 "ddgst": false, 00:21:09.674 "multipath": "multipath" 00:21:09.674 } 00:21:09.674 }, 00:21:09.674 { 00:21:09.674 "method": "bdev_nvme_set_hotplug", 00:21:09.674 "params": { 00:21:09.674 "period_us": 100000, 00:21:09.674 "enable": false 00:21:09.674 } 00:21:09.674 }, 00:21:09.674 { 00:21:09.674 "method": "bdev_wait_for_examine" 00:21:09.674 } 00:21:09.674 ] 00:21:09.674 }, 00:21:09.674 { 00:21:09.674 "subsystem": "nbd", 00:21:09.674 "config": [] 00:21:09.674 } 00:21:09.674 ] 00:21:09.674 }' 00:21:09.674 [2024-11-25 12:55:49.405153] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:21:09.674 [2024-11-25 12:55:49.405208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655286 ] 00:21:09.674 [2024-11-25 12:55:49.470241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.674 [2024-11-25 12:55:49.499260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.933 [2024-11-25 12:55:49.633161] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.504 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.504 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:10.504 12:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:10.504 Running I/O for 10 seconds... 00:21:12.390 5937.00 IOPS, 23.19 MiB/s [2024-11-25T11:55:53.679Z] 5685.50 IOPS, 22.21 MiB/s [2024-11-25T11:55:54.621Z] 5892.33 IOPS, 23.02 MiB/s [2024-11-25T11:55:55.562Z] 5874.50 IOPS, 22.95 MiB/s [2024-11-25T11:55:56.504Z] 5627.40 IOPS, 21.98 MiB/s [2024-11-25T11:55:57.444Z] 5709.67 IOPS, 22.30 MiB/s [2024-11-25T11:55:58.386Z] 5740.00 IOPS, 22.42 MiB/s [2024-11-25T11:55:59.331Z] 5731.12 IOPS, 22.39 MiB/s [2024-11-25T11:56:00.719Z] 5732.78 IOPS, 22.39 MiB/s [2024-11-25T11:56:00.719Z] 5726.10 IOPS, 22.37 MiB/s 00:21:20.816 Latency(us) 00:21:20.816 [2024-11-25T11:56:00.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.816 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.816 Verification LBA range: start 0x0 length 0x2000 00:21:20.816 TLSTESTn1 : 10.02 5728.21 22.38 0.00 0.00 22310.66 4724.05 34952.53 00:21:20.816 [2024-11-25T11:56:00.719Z] =================================================================================================================== 00:21:20.816 [2024-11-25T11:56:00.719Z] Total : 5728.21 22.38 0.00 0.00 22310.66 4724.05 34952.53 00:21:20.816 { 00:21:20.816 "results": [ 00:21:20.816 { 00:21:20.816 "job": "TLSTESTn1", 00:21:20.816 "core_mask": "0x4", 00:21:20.816 "workload": "verify", 00:21:20.816 "status": "finished", 00:21:20.816 "verify_range": { 00:21:20.816 "start": 0, 00:21:20.816 "length": 8192 00:21:20.816 }, 00:21:20.816 "queue_depth": 128, 00:21:20.816 "io_size": 4096, 00:21:20.816 "runtime": 10.018482, 00:21:20.816 "iops": 5728.213116518052, 00:21:20.816 "mibps": 22.37583248639864, 00:21:20.816 "io_failed": 0, 00:21:20.816 "io_timeout": 0, 00:21:20.816 "avg_latency_us": 22310.656529355732, 00:21:20.816 "min_latency_us": 4724.053333333333, 00:21:20.816 "max_latency_us": 34952.53333333333 00:21:20.816 } 00:21:20.816 ], 00:21:20.816 "core_count": 1 00:21:20.816 } 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 655286 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 655286 ']' 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 655286 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 655286 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 655286' 00:21:20.816 killing process with pid 655286 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 655286 00:21:20.816 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.816 00:21:20.816 Latency(us) 00:21:20.816 [2024-11-25T11:56:00.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.816 [2024-11-25T11:56:00.719Z] =================================================================================================================== 00:21:20.816 [2024-11-25T11:56:00.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 655286 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 655184 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 655184 ']' 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 655184 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 655184 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 655184' 00:21:20.816 killing process with pid 655184 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 655184 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 655184 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=657538 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 657538 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:20.816 12:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 657538 ']' 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.816 12:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.077 [2024-11-25 12:56:00.739684] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:21.077 [2024-11-25 12:56:00.739742] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.077 [2024-11-25 12:56:00.823126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.077 [2024-11-25 12:56:00.857761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.077 [2024-11-25 12:56:00.857796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.077 [2024-11-25 12:56:00.857803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.077 [2024-11-25 12:56:00.857810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.077 [2024-11-25 12:56:00.857816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
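The setup_nvmf_tgt sequence (target/tls.sh@52-59) runs again below; condensed into its bare RPC calls, it is:

  scripts/rpc.py nvmf_create_transport -t tcp -o                 # TCP transport
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k                              # -k marks the listener secure (TLS)
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB backing bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0   # PSK file, now mode 0600
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0                       # bind host1 to the PSK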
00:21:21.077 [2024-11-25 12:56:00.858397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.649 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.649 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:21.649 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.649 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.649 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.910 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.910 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.8XLMICOfh0 00:21:21.910 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.8XLMICOfh0 00:21:21.910 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.910 [2024-11-25 12:56:01.710931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.910 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.171 12:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:22.171 [2024-11-25 12:56:02.031718] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.171 [2024-11-25 12:56:02.031959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.171 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:22.431 malloc0 00:21:22.431 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:22.692 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:21:22.692 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=657992 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 657992 /var/tmp/bdevperf.sock 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 657992 ']' 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.954 12:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.954 [2024-11-25 12:56:02.754274] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:22.954 [2024-11-25 12:56:02.754328] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657992 ] 00:21:22.954 [2024-11-25 12:56:02.843667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.215 [2024-11-25 12:56:02.873727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.787 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.787 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:23.787 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:21:24.048 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:24.048 [2024-11-25 12:56:03.861247] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.048 nvme0n1 00:21:24.309 12:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.309 Running I/O for 1 seconds... 
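Condensed from the xtrace above: the RPC sequence that wires a single PSK file (/tmp/tmp.8XLMICOfh0) into both ends of the TLS connection. Every command below appears verbatim in the trace; only the $RPC shorthand is added here.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Target side (default socket /var/tmp/spdk.sock); -k on the listener
    # is what enables TLS, hence the "experimental" notices.
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # Initiator side (bdevperf's RPC socket): register the same key file
    # under the same name, then attach with --psk.
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The resulting namespace surfaces as nvme0n1, which is what perform_tests drives below.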
00:21:25.253 4107.00 IOPS, 16.04 MiB/s 00:21:25.253 Latency(us) 00:21:25.253 [2024-11-25T11:56:05.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.253 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:25.253 Verification LBA range: start 0x0 length 0x2000 00:21:25.253 nvme0n1 : 1.02 4172.54 16.30 0.00 0.00 30475.41 6034.77 62040.75 00:21:25.253 [2024-11-25T11:56:05.156Z] =================================================================================================================== 00:21:25.253 [2024-11-25T11:56:05.156Z] Total : 4172.54 16.30 0.00 0.00 30475.41 6034.77 62040.75 00:21:25.253 { 00:21:25.253 "results": [ 00:21:25.253 { 00:21:25.253 "job": "nvme0n1", 00:21:25.253 "core_mask": "0x2", 00:21:25.253 "workload": "verify", 00:21:25.253 "status": "finished", 00:21:25.253 "verify_range": { 00:21:25.253 "start": 0, 00:21:25.253 "length": 8192 00:21:25.253 }, 00:21:25.253 "queue_depth": 128, 00:21:25.253 "io_size": 4096, 00:21:25.253 "runtime": 1.01521, 00:21:25.253 "iops": 4172.535731523527, 00:21:25.253 "mibps": 16.298967701263777, 00:21:25.253 "io_failed": 0, 00:21:25.253 "io_timeout": 0, 00:21:25.253 "avg_latency_us": 30475.413232609382, 00:21:25.253 "min_latency_us": 6034.7733333333335, 00:21:25.253 "max_latency_us": 62040.746666666666 00:21:25.253 } 00:21:25.253 ], 00:21:25.253 "core_count": 1 00:21:25.253 } 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 657992 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 657992 ']' 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 657992 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 657992 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 657992' 00:21:25.253 killing process with pid 657992 00:21:25.253 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 657992 00:21:25.253 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.254 00:21:25.254 Latency(us) 00:21:25.254 [2024-11-25T11:56:05.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.254 [2024-11-25T11:56:05.157Z] =================================================================================================================== 00:21:25.254 [2024-11-25T11:56:05.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.254 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 657992 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 657538 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 657538 ']' 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 657538 00:21:25.515 12:56:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 657538 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 657538' 00:21:25.515 killing process with pid 657538 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 657538 00:21:25.515 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 657538 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=658374 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 658374 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 658374 ']' 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.776 12:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.776 [2024-11-25 12:56:05.502262] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:25.776 [2024-11-25 12:56:05.502323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.776 [2024-11-25 12:56:05.585875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.776 [2024-11-25 12:56:05.620304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.776 [2024-11-25 12:56:05.620337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:25.777 [2024-11-25 12:56:05.620345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.777 [2024-11-25 12:56:05.620351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.777 [2024-11-25 12:56:05.620357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.777 [2024-11-25 12:56:05.620928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.720 [2024-11-25 12:56:06.329017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.720 malloc0 00:21:26.720 [2024-11-25 12:56:06.355689] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.720 [2024-11-25 12:56:06.355921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=658704 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 658704 /var/tmp/bdevperf.sock 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 658704 ']' 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.720 12:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.720 [2024-11-25 12:56:06.434540] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
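The first bdevperf result block above reports both iops and mibps, and with 4096-byte I/O the two must differ by exactly 2^20 / 4096 = 256. A one-line consistency check (illustrative, not part of the test):

    # MiB/s = IOPS * io_size / 2^20; with io_size 4096 that is IOPS / 256
    awk 'BEGIN { printf "%.6f\n", 4172.535731523527 * 4096 / 1048576 }'
    # prints 16.298968, matching the "mibps" field in the JSON above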
00:21:26.720 [2024-11-25 12:56:06.434589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658704 ] 00:21:26.720 [2024-11-25 12:56:06.523560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.720 [2024-11-25 12:56:06.553445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.662 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.662 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:27.663 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8XLMICOfh0 00:21:27.663 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:27.663 [2024-11-25 12:56:07.561119] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.923 nvme0n1 00:21:27.923 12:56:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.923 Running I/O for 1 seconds... 00:21:28.865 4668.00 IOPS, 18.23 MiB/s 00:21:28.865 Latency(us) 00:21:28.865 [2024-11-25T11:56:08.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.865 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.865 Verification LBA range: start 0x0 length 0x2000 00:21:28.865 nvme0n1 : 1.01 4727.42 18.47 0.00 0.00 26894.89 5952.85 74274.13 00:21:28.865 [2024-11-25T11:56:08.768Z] =================================================================================================================== 00:21:28.865 [2024-11-25T11:56:08.768Z] Total : 4727.42 18.47 0.00 0.00 26894.89 5952.85 74274.13 00:21:28.865 { 00:21:28.865 "results": [ 00:21:28.865 { 00:21:28.865 "job": "nvme0n1", 00:21:28.865 "core_mask": "0x2", 00:21:28.865 "workload": "verify", 00:21:28.865 "status": "finished", 00:21:28.865 "verify_range": { 00:21:28.865 "start": 0, 00:21:28.865 "length": 8192 00:21:28.865 }, 00:21:28.865 "queue_depth": 128, 00:21:28.865 "io_size": 4096, 00:21:28.865 "runtime": 1.014506, 00:21:28.865 "iops": 4727.423987635361, 00:21:28.865 "mibps": 18.46649995170063, 00:21:28.865 "io_failed": 0, 00:21:28.865 "io_timeout": 0, 00:21:28.865 "avg_latency_us": 26894.88600500417, 00:21:28.865 "min_latency_us": 5952.8533333333335, 00:21:28.865 "max_latency_us": 74274.13333333333 00:21:28.865 } 00:21:28.865 ], 00:21:28.865 "core_count": 1 00:21:28.865 } 00:21:29.127 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:29.127 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.127 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.127 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.127 12:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:29.127 "subsystems": [ 00:21:29.127 { 00:21:29.127 "subsystem": "keyring", 00:21:29.127 "config": [ 00:21:29.127 { 00:21:29.127 "method": "keyring_file_add_key", 00:21:29.127 "params": { 00:21:29.127 "name": "key0", 00:21:29.127 "path": "/tmp/tmp.8XLMICOfh0" 00:21:29.127 } 00:21:29.127 } 00:21:29.127 ] 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "subsystem": "iobuf", 00:21:29.127 "config": [ 00:21:29.127 { 00:21:29.127 "method": "iobuf_set_options", 00:21:29.127 "params": { 00:21:29.127 "small_pool_count": 8192, 00:21:29.127 "large_pool_count": 1024, 00:21:29.127 "small_bufsize": 8192, 00:21:29.127 "large_bufsize": 135168, 00:21:29.127 "enable_numa": false 00:21:29.127 } 00:21:29.127 } 00:21:29.127 ] 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "subsystem": "sock", 00:21:29.127 "config": [ 00:21:29.127 { 00:21:29.127 "method": "sock_set_default_impl", 00:21:29.127 "params": { 00:21:29.127 "impl_name": "posix" 00:21:29.127 } 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "method": "sock_impl_set_options", 00:21:29.127 "params": { 00:21:29.127 "impl_name": "ssl", 00:21:29.127 "recv_buf_size": 4096, 00:21:29.127 "send_buf_size": 4096, 00:21:29.127 "enable_recv_pipe": true, 00:21:29.127 "enable_quickack": false, 00:21:29.127 "enable_placement_id": 0, 00:21:29.127 "enable_zerocopy_send_server": true, 00:21:29.127 "enable_zerocopy_send_client": false, 00:21:29.127 "zerocopy_threshold": 0, 00:21:29.127 "tls_version": 0, 00:21:29.127 "enable_ktls": false 00:21:29.127 } 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "method": "sock_impl_set_options", 00:21:29.127 "params": { 00:21:29.127 "impl_name": "posix", 00:21:29.127 "recv_buf_size": 2097152, 00:21:29.127 "send_buf_size": 2097152, 00:21:29.127 "enable_recv_pipe": true, 00:21:29.127 "enable_quickack": false, 00:21:29.127 "enable_placement_id": 0, 00:21:29.127 "enable_zerocopy_send_server": true, 00:21:29.127 "enable_zerocopy_send_client": false, 00:21:29.127 "zerocopy_threshold": 0, 00:21:29.127 "tls_version": 0, 00:21:29.127 "enable_ktls": false 00:21:29.127 } 00:21:29.127 } 00:21:29.127 ] 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "subsystem": "vmd", 00:21:29.127 "config": [] 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "subsystem": "accel", 00:21:29.127 "config": [ 00:21:29.127 { 00:21:29.127 "method": "accel_set_options", 00:21:29.127 "params": { 00:21:29.127 "small_cache_size": 128, 00:21:29.127 "large_cache_size": 16, 00:21:29.127 "task_count": 2048, 00:21:29.127 "sequence_count": 2048, 00:21:29.127 "buf_count": 2048 00:21:29.127 } 00:21:29.127 } 00:21:29.127 ] 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "subsystem": "bdev", 00:21:29.127 "config": [ 00:21:29.127 { 00:21:29.127 "method": "bdev_set_options", 00:21:29.127 "params": { 00:21:29.127 "bdev_io_pool_size": 65535, 00:21:29.127 "bdev_io_cache_size": 256, 00:21:29.127 "bdev_auto_examine": true, 00:21:29.127 "iobuf_small_cache_size": 128, 00:21:29.127 "iobuf_large_cache_size": 16 00:21:29.127 } 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "method": "bdev_raid_set_options", 00:21:29.127 "params": { 00:21:29.127 "process_window_size_kb": 1024, 00:21:29.127 "process_max_bandwidth_mb_sec": 0 00:21:29.127 } 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "method": "bdev_iscsi_set_options", 00:21:29.127 "params": { 00:21:29.127 "timeout_sec": 30 00:21:29.127 } 00:21:29.127 }, 00:21:29.127 { 00:21:29.127 "method": "bdev_nvme_set_options", 00:21:29.128 "params": { 00:21:29.128 "action_on_timeout": "none", 00:21:29.128 
"timeout_us": 0, 00:21:29.128 "timeout_admin_us": 0, 00:21:29.128 "keep_alive_timeout_ms": 10000, 00:21:29.128 "arbitration_burst": 0, 00:21:29.128 "low_priority_weight": 0, 00:21:29.128 "medium_priority_weight": 0, 00:21:29.128 "high_priority_weight": 0, 00:21:29.128 "nvme_adminq_poll_period_us": 10000, 00:21:29.128 "nvme_ioq_poll_period_us": 0, 00:21:29.128 "io_queue_requests": 0, 00:21:29.128 "delay_cmd_submit": true, 00:21:29.128 "transport_retry_count": 4, 00:21:29.128 "bdev_retry_count": 3, 00:21:29.128 "transport_ack_timeout": 0, 00:21:29.128 "ctrlr_loss_timeout_sec": 0, 00:21:29.128 "reconnect_delay_sec": 0, 00:21:29.128 "fast_io_fail_timeout_sec": 0, 00:21:29.128 "disable_auto_failback": false, 00:21:29.128 "generate_uuids": false, 00:21:29.128 "transport_tos": 0, 00:21:29.128 "nvme_error_stat": false, 00:21:29.128 "rdma_srq_size": 0, 00:21:29.128 "io_path_stat": false, 00:21:29.128 "allow_accel_sequence": false, 00:21:29.128 "rdma_max_cq_size": 0, 00:21:29.128 "rdma_cm_event_timeout_ms": 0, 00:21:29.128 "dhchap_digests": [ 00:21:29.128 "sha256", 00:21:29.128 "sha384", 00:21:29.128 "sha512" 00:21:29.128 ], 00:21:29.128 "dhchap_dhgroups": [ 00:21:29.128 "null", 00:21:29.128 "ffdhe2048", 00:21:29.128 "ffdhe3072", 00:21:29.128 "ffdhe4096", 00:21:29.128 "ffdhe6144", 00:21:29.128 "ffdhe8192" 00:21:29.128 ] 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "bdev_nvme_set_hotplug", 00:21:29.128 "params": { 00:21:29.128 "period_us": 100000, 00:21:29.128 "enable": false 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "bdev_malloc_create", 00:21:29.128 "params": { 00:21:29.128 "name": "malloc0", 00:21:29.128 "num_blocks": 8192, 00:21:29.128 "block_size": 4096, 00:21:29.128 "physical_block_size": 4096, 00:21:29.128 "uuid": "54af2ac0-e221-4eb8-ab0c-870ba88f8992", 00:21:29.128 "optimal_io_boundary": 0, 00:21:29.128 "md_size": 0, 00:21:29.128 "dif_type": 0, 00:21:29.128 "dif_is_head_of_md": false, 00:21:29.128 "dif_pi_format": 0 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "bdev_wait_for_examine" 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "subsystem": "nbd", 00:21:29.128 "config": [] 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "subsystem": "scheduler", 00:21:29.128 "config": [ 00:21:29.128 { 00:21:29.128 "method": "framework_set_scheduler", 00:21:29.128 "params": { 00:21:29.128 "name": "static" 00:21:29.128 } 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "subsystem": "nvmf", 00:21:29.128 "config": [ 00:21:29.128 { 00:21:29.128 "method": "nvmf_set_config", 00:21:29.128 "params": { 00:21:29.128 "discovery_filter": "match_any", 00:21:29.128 "admin_cmd_passthru": { 00:21:29.128 "identify_ctrlr": false 00:21:29.128 }, 00:21:29.128 "dhchap_digests": [ 00:21:29.128 "sha256", 00:21:29.128 "sha384", 00:21:29.128 "sha512" 00:21:29.128 ], 00:21:29.128 "dhchap_dhgroups": [ 00:21:29.128 "null", 00:21:29.128 "ffdhe2048", 00:21:29.128 "ffdhe3072", 00:21:29.128 "ffdhe4096", 00:21:29.128 "ffdhe6144", 00:21:29.128 "ffdhe8192" 00:21:29.128 ] 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "nvmf_set_max_subsystems", 00:21:29.128 "params": { 00:21:29.128 "max_subsystems": 1024 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "nvmf_set_crdt", 00:21:29.128 "params": { 00:21:29.128 "crdt1": 0, 00:21:29.128 "crdt2": 0, 00:21:29.128 "crdt3": 0 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "nvmf_create_transport", 00:21:29.128 "params": 
{ 00:21:29.128 "trtype": "TCP", 00:21:29.128 "max_queue_depth": 128, 00:21:29.128 "max_io_qpairs_per_ctrlr": 127, 00:21:29.128 "in_capsule_data_size": 4096, 00:21:29.128 "max_io_size": 131072, 00:21:29.128 "io_unit_size": 131072, 00:21:29.128 "max_aq_depth": 128, 00:21:29.128 "num_shared_buffers": 511, 00:21:29.128 "buf_cache_size": 4294967295, 00:21:29.128 "dif_insert_or_strip": false, 00:21:29.128 "zcopy": false, 00:21:29.128 "c2h_success": false, 00:21:29.128 "sock_priority": 0, 00:21:29.128 "abort_timeout_sec": 1, 00:21:29.128 "ack_timeout": 0, 00:21:29.128 "data_wr_pool_size": 0 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "nvmf_create_subsystem", 00:21:29.128 "params": { 00:21:29.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.128 "allow_any_host": false, 00:21:29.128 "serial_number": "00000000000000000000", 00:21:29.128 "model_number": "SPDK bdev Controller", 00:21:29.128 "max_namespaces": 32, 00:21:29.128 "min_cntlid": 1, 00:21:29.128 "max_cntlid": 65519, 00:21:29.128 "ana_reporting": false 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "nvmf_subsystem_add_host", 00:21:29.128 "params": { 00:21:29.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.128 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.128 "psk": "key0" 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "nvmf_subsystem_add_ns", 00:21:29.128 "params": { 00:21:29.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.128 "namespace": { 00:21:29.128 "nsid": 1, 00:21:29.128 "bdev_name": "malloc0", 00:21:29.128 "nguid": "54AF2AC0E2214EB8AB0C870BA88F8992", 00:21:29.128 "uuid": "54af2ac0-e221-4eb8-ab0c-870ba88f8992", 00:21:29.128 "no_auto_visible": false 00:21:29.128 } 00:21:29.128 } 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "method": "nvmf_subsystem_add_listener", 00:21:29.128 "params": { 00:21:29.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.128 "listen_address": { 00:21:29.128 "trtype": "TCP", 00:21:29.128 "adrfam": "IPv4", 00:21:29.128 "traddr": "10.0.0.2", 00:21:29.128 "trsvcid": "4420" 00:21:29.128 }, 00:21:29.128 "secure_channel": false, 00:21:29.128 "sock_impl": "ssl" 00:21:29.128 } 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 }' 00:21:29.128 12:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:29.408 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:29.408 "subsystems": [ 00:21:29.408 { 00:21:29.408 "subsystem": "keyring", 00:21:29.408 "config": [ 00:21:29.408 { 00:21:29.408 "method": "keyring_file_add_key", 00:21:29.408 "params": { 00:21:29.408 "name": "key0", 00:21:29.408 "path": "/tmp/tmp.8XLMICOfh0" 00:21:29.408 } 00:21:29.408 } 00:21:29.408 ] 00:21:29.408 }, 00:21:29.408 { 00:21:29.408 "subsystem": "iobuf", 00:21:29.408 "config": [ 00:21:29.408 { 00:21:29.408 "method": "iobuf_set_options", 00:21:29.408 "params": { 00:21:29.408 "small_pool_count": 8192, 00:21:29.408 "large_pool_count": 1024, 00:21:29.408 "small_bufsize": 8192, 00:21:29.408 "large_bufsize": 135168, 00:21:29.409 "enable_numa": false 00:21:29.409 } 00:21:29.409 } 00:21:29.409 ] 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "subsystem": "sock", 00:21:29.409 "config": [ 00:21:29.409 { 00:21:29.409 "method": "sock_set_default_impl", 00:21:29.409 "params": { 00:21:29.409 "impl_name": "posix" 00:21:29.409 } 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "method": "sock_impl_set_options", 00:21:29.409 
"params": { 00:21:29.409 "impl_name": "ssl", 00:21:29.409 "recv_buf_size": 4096, 00:21:29.409 "send_buf_size": 4096, 00:21:29.409 "enable_recv_pipe": true, 00:21:29.409 "enable_quickack": false, 00:21:29.409 "enable_placement_id": 0, 00:21:29.409 "enable_zerocopy_send_server": true, 00:21:29.409 "enable_zerocopy_send_client": false, 00:21:29.409 "zerocopy_threshold": 0, 00:21:29.409 "tls_version": 0, 00:21:29.409 "enable_ktls": false 00:21:29.409 } 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "method": "sock_impl_set_options", 00:21:29.409 "params": { 00:21:29.409 "impl_name": "posix", 00:21:29.409 "recv_buf_size": 2097152, 00:21:29.409 "send_buf_size": 2097152, 00:21:29.409 "enable_recv_pipe": true, 00:21:29.409 "enable_quickack": false, 00:21:29.409 "enable_placement_id": 0, 00:21:29.409 "enable_zerocopy_send_server": true, 00:21:29.409 "enable_zerocopy_send_client": false, 00:21:29.409 "zerocopy_threshold": 0, 00:21:29.409 "tls_version": 0, 00:21:29.409 "enable_ktls": false 00:21:29.409 } 00:21:29.409 } 00:21:29.409 ] 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "subsystem": "vmd", 00:21:29.409 "config": [] 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "subsystem": "accel", 00:21:29.409 "config": [ 00:21:29.409 { 00:21:29.409 "method": "accel_set_options", 00:21:29.409 "params": { 00:21:29.409 "small_cache_size": 128, 00:21:29.409 "large_cache_size": 16, 00:21:29.409 "task_count": 2048, 00:21:29.409 "sequence_count": 2048, 00:21:29.409 "buf_count": 2048 00:21:29.409 } 00:21:29.409 } 00:21:29.409 ] 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "subsystem": "bdev", 00:21:29.409 "config": [ 00:21:29.409 { 00:21:29.409 "method": "bdev_set_options", 00:21:29.409 "params": { 00:21:29.409 "bdev_io_pool_size": 65535, 00:21:29.409 "bdev_io_cache_size": 256, 00:21:29.409 "bdev_auto_examine": true, 00:21:29.409 "iobuf_small_cache_size": 128, 00:21:29.409 "iobuf_large_cache_size": 16 00:21:29.409 } 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "method": "bdev_raid_set_options", 00:21:29.409 "params": { 00:21:29.409 "process_window_size_kb": 1024, 00:21:29.409 "process_max_bandwidth_mb_sec": 0 00:21:29.409 } 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "method": "bdev_iscsi_set_options", 00:21:29.409 "params": { 00:21:29.409 "timeout_sec": 30 00:21:29.409 } 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "method": "bdev_nvme_set_options", 00:21:29.409 "params": { 00:21:29.409 "action_on_timeout": "none", 00:21:29.409 "timeout_us": 0, 00:21:29.409 "timeout_admin_us": 0, 00:21:29.409 "keep_alive_timeout_ms": 10000, 00:21:29.409 "arbitration_burst": 0, 00:21:29.409 "low_priority_weight": 0, 00:21:29.409 "medium_priority_weight": 0, 00:21:29.409 "high_priority_weight": 0, 00:21:29.409 "nvme_adminq_poll_period_us": 10000, 00:21:29.409 "nvme_ioq_poll_period_us": 0, 00:21:29.409 "io_queue_requests": 512, 00:21:29.409 "delay_cmd_submit": true, 00:21:29.409 "transport_retry_count": 4, 00:21:29.409 "bdev_retry_count": 3, 00:21:29.409 "transport_ack_timeout": 0, 00:21:29.409 "ctrlr_loss_timeout_sec": 0, 00:21:29.409 "reconnect_delay_sec": 0, 00:21:29.409 "fast_io_fail_timeout_sec": 0, 00:21:29.409 "disable_auto_failback": false, 00:21:29.409 "generate_uuids": false, 00:21:29.409 "transport_tos": 0, 00:21:29.409 "nvme_error_stat": false, 00:21:29.409 "rdma_srq_size": 0, 00:21:29.409 "io_path_stat": false, 00:21:29.409 "allow_accel_sequence": false, 00:21:29.409 "rdma_max_cq_size": 0, 00:21:29.409 "rdma_cm_event_timeout_ms": 0, 00:21:29.409 "dhchap_digests": [ 00:21:29.409 "sha256", 00:21:29.409 "sha384", 00:21:29.409 
"sha512" 00:21:29.409 ], 00:21:29.409 "dhchap_dhgroups": [ 00:21:29.409 "null", 00:21:29.409 "ffdhe2048", 00:21:29.409 "ffdhe3072", 00:21:29.409 "ffdhe4096", 00:21:29.409 "ffdhe6144", 00:21:29.409 "ffdhe8192" 00:21:29.409 ] 00:21:29.409 } 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "method": "bdev_nvme_attach_controller", 00:21:29.409 "params": { 00:21:29.409 "name": "nvme0", 00:21:29.409 "trtype": "TCP", 00:21:29.409 "adrfam": "IPv4", 00:21:29.409 "traddr": "10.0.0.2", 00:21:29.409 "trsvcid": "4420", 00:21:29.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.409 "prchk_reftag": false, 00:21:29.409 "prchk_guard": false, 00:21:29.409 "ctrlr_loss_timeout_sec": 0, 00:21:29.409 "reconnect_delay_sec": 0, 00:21:29.409 "fast_io_fail_timeout_sec": 0, 00:21:29.409 "psk": "key0", 00:21:29.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.409 "hdgst": false, 00:21:29.409 "ddgst": false, 00:21:29.409 "multipath": "multipath" 00:21:29.409 } 00:21:29.409 }, 00:21:29.409 { 00:21:29.409 "method": "bdev_nvme_set_hotplug", 00:21:29.409 "params": { 00:21:29.409 "period_us": 100000, 00:21:29.409 "enable": false 00:21:29.410 } 00:21:29.410 }, 00:21:29.410 { 00:21:29.410 "method": "bdev_enable_histogram", 00:21:29.410 "params": { 00:21:29.410 "name": "nvme0n1", 00:21:29.410 "enable": true 00:21:29.410 } 00:21:29.410 }, 00:21:29.410 { 00:21:29.410 "method": "bdev_wait_for_examine" 00:21:29.410 } 00:21:29.410 ] 00:21:29.410 }, 00:21:29.410 { 00:21:29.410 "subsystem": "nbd", 00:21:29.410 "config": [] 00:21:29.410 } 00:21:29.410 ] 00:21:29.410 }' 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 658704 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 658704 ']' 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 658704 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 658704 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 658704' 00:21:29.410 killing process with pid 658704 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 658704 00:21:29.410 Received shutdown signal, test time was about 1.000000 seconds 00:21:29.410 00:21:29.410 Latency(us) 00:21:29.410 [2024-11-25T11:56:09.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.410 [2024-11-25T11:56:09.313Z] =================================================================================================================== 00:21:29.410 [2024-11-25T11:56:09.313Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 658704 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 658374 00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 658374 ']' 
00:21:29.410 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 658374 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 658374 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 658374' 00:21:29.671 killing process with pid 658374 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 658374 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 658374 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.671 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:29.671 "subsystems": [ 00:21:29.671 { 00:21:29.671 "subsystem": "keyring", 00:21:29.671 "config": [ 00:21:29.671 { 00:21:29.671 "method": "keyring_file_add_key", 00:21:29.671 "params": { 00:21:29.671 "name": "key0", 00:21:29.671 "path": "/tmp/tmp.8XLMICOfh0" 00:21:29.671 } 00:21:29.671 } 00:21:29.671 ] 00:21:29.671 }, 00:21:29.671 { 00:21:29.671 "subsystem": "iobuf", 00:21:29.671 "config": [ 00:21:29.671 { 00:21:29.671 "method": "iobuf_set_options", 00:21:29.671 "params": { 00:21:29.671 "small_pool_count": 8192, 00:21:29.671 "large_pool_count": 1024, 00:21:29.671 "small_bufsize": 8192, 00:21:29.671 "large_bufsize": 135168, 00:21:29.671 "enable_numa": false 00:21:29.671 } 00:21:29.671 } 00:21:29.671 ] 00:21:29.671 }, 00:21:29.671 { 00:21:29.672 "subsystem": "sock", 00:21:29.672 "config": [ 00:21:29.672 { 00:21:29.672 "method": "sock_set_default_impl", 00:21:29.672 "params": { 00:21:29.672 "impl_name": "posix" 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "sock_impl_set_options", 00:21:29.672 "params": { 00:21:29.672 "impl_name": "ssl", 00:21:29.672 "recv_buf_size": 4096, 00:21:29.672 "send_buf_size": 4096, 00:21:29.672 "enable_recv_pipe": true, 00:21:29.672 "enable_quickack": false, 00:21:29.672 "enable_placement_id": 0, 00:21:29.672 "enable_zerocopy_send_server": true, 00:21:29.672 "enable_zerocopy_send_client": false, 00:21:29.672 "zerocopy_threshold": 0, 00:21:29.672 "tls_version": 0, 00:21:29.672 "enable_ktls": false 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "sock_impl_set_options", 00:21:29.672 "params": { 00:21:29.672 "impl_name": "posix", 00:21:29.672 "recv_buf_size": 2097152, 00:21:29.672 "send_buf_size": 2097152, 00:21:29.672 "enable_recv_pipe": true, 00:21:29.672 "enable_quickack": false, 00:21:29.672 "enable_placement_id": 0, 00:21:29.672 "enable_zerocopy_send_server": true, 00:21:29.672 "enable_zerocopy_send_client": false, 
00:21:29.672 "zerocopy_threshold": 0, 00:21:29.672 "tls_version": 0, 00:21:29.672 "enable_ktls": false 00:21:29.672 } 00:21:29.672 } 00:21:29.672 ] 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "subsystem": "vmd", 00:21:29.672 "config": [] 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "subsystem": "accel", 00:21:29.672 "config": [ 00:21:29.672 { 00:21:29.672 "method": "accel_set_options", 00:21:29.672 "params": { 00:21:29.672 "small_cache_size": 128, 00:21:29.672 "large_cache_size": 16, 00:21:29.672 "task_count": 2048, 00:21:29.672 "sequence_count": 2048, 00:21:29.672 "buf_count": 2048 00:21:29.672 } 00:21:29.672 } 00:21:29.672 ] 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "subsystem": "bdev", 00:21:29.672 "config": [ 00:21:29.672 { 00:21:29.672 "method": "bdev_set_options", 00:21:29.672 "params": { 00:21:29.672 "bdev_io_pool_size": 65535, 00:21:29.672 "bdev_io_cache_size": 256, 00:21:29.672 "bdev_auto_examine": true, 00:21:29.672 "iobuf_small_cache_size": 128, 00:21:29.672 "iobuf_large_cache_size": 16 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "bdev_raid_set_options", 00:21:29.672 "params": { 00:21:29.672 "process_window_size_kb": 1024, 00:21:29.672 "process_max_bandwidth_mb_sec": 0 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "bdev_iscsi_set_options", 00:21:29.672 "params": { 00:21:29.672 "timeout_sec": 30 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "bdev_nvme_set_options", 00:21:29.672 "params": { 00:21:29.672 "action_on_timeout": "none", 00:21:29.672 "timeout_us": 0, 00:21:29.672 "timeout_admin_us": 0, 00:21:29.672 "keep_alive_timeout_ms": 10000, 00:21:29.672 "arbitration_burst": 0, 00:21:29.672 "low_priority_weight": 0, 00:21:29.672 "medium_priority_weight": 0, 00:21:29.672 "high_priority_weight": 0, 00:21:29.672 "nvme_adminq_poll_period_us": 10000, 00:21:29.672 "nvme_ioq_poll_period_us": 0, 00:21:29.672 "io_queue_requests": 0, 00:21:29.672 "delay_cmd_submit": true, 00:21:29.672 "transport_retry_count": 4, 00:21:29.672 "bdev_retry_count": 3, 00:21:29.672 "transport_ack_timeout": 0, 00:21:29.672 "ctrlr_loss_timeout_sec": 0, 00:21:29.672 "reconnect_delay_sec": 0, 00:21:29.672 "fast_io_fail_timeout_sec": 0, 00:21:29.672 "disable_auto_failback": false, 00:21:29.672 "generate_uuids": false, 00:21:29.672 "transport_tos": 0, 00:21:29.672 "nvme_error_stat": false, 00:21:29.672 "rdma_srq_size": 0, 00:21:29.672 "io_path_stat": false, 00:21:29.672 "allow_accel_sequence": false, 00:21:29.672 "rdma_max_cq_size": 0, 00:21:29.672 "rdma_cm_event_timeout_ms": 0, 00:21:29.672 "dhchap_digests": [ 00:21:29.672 "sha256", 00:21:29.672 "sha384", 00:21:29.672 "sha512" 00:21:29.672 ], 00:21:29.672 "dhchap_dhgroups": [ 00:21:29.672 "null", 00:21:29.672 "ffdhe2048", 00:21:29.672 "ffdhe3072", 00:21:29.672 "ffdhe4096", 00:21:29.672 "ffdhe6144", 00:21:29.672 "ffdhe8192" 00:21:29.672 ] 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "bdev_nvme_set_hotplug", 00:21:29.672 "params": { 00:21:29.672 "period_us": 100000, 00:21:29.672 "enable": false 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "bdev_malloc_create", 00:21:29.672 "params": { 00:21:29.672 "name": "malloc0", 00:21:29.672 "num_blocks": 8192, 00:21:29.672 "block_size": 4096, 00:21:29.672 "physical_block_size": 4096, 00:21:29.672 "uuid": "54af2ac0-e221-4eb8-ab0c-870ba88f8992", 00:21:29.672 "optimal_io_boundary": 0, 00:21:29.672 "md_size": 0, 00:21:29.672 "dif_type": 0, 00:21:29.672 "dif_is_head_of_md": false, 00:21:29.672 "dif_pi_format": 0 
00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "bdev_wait_for_examine" 00:21:29.672 } 00:21:29.672 ] 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "subsystem": "nbd", 00:21:29.672 "config": [] 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "subsystem": "scheduler", 00:21:29.672 "config": [ 00:21:29.672 { 00:21:29.672 "method": "framework_set_scheduler", 00:21:29.672 "params": { 00:21:29.672 "name": "static" 00:21:29.672 } 00:21:29.672 } 00:21:29.672 ] 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "subsystem": "nvmf", 00:21:29.672 "config": [ 00:21:29.672 { 00:21:29.672 "method": "nvmf_set_config", 00:21:29.672 "params": { 00:21:29.672 "discovery_filter": "match_any", 00:21:29.672 "admin_cmd_passthru": { 00:21:29.672 "identify_ctrlr": false 00:21:29.672 }, 00:21:29.672 "dhchap_digests": [ 00:21:29.672 "sha256", 00:21:29.672 "sha384", 00:21:29.672 "sha512" 00:21:29.672 ], 00:21:29.672 "dhchap_dhgroups": [ 00:21:29.672 "null", 00:21:29.672 "ffdhe2048", 00:21:29.672 "ffdhe3072", 00:21:29.672 "ffdhe4096", 00:21:29.672 "ffdhe6144", 00:21:29.672 "ffdhe8192" 00:21:29.672 ] 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.672 "method": "nvmf_set_max_subsystems", 00:21:29.672 "params": { 00:21:29.672 "max_subsystems": 1024 00:21:29.672 } 00:21:29.672 }, 00:21:29.672 { 00:21:29.673 "method": "nvmf_set_crdt", 00:21:29.673 "params": { 00:21:29.673 "crdt1": 0, 00:21:29.673 "crdt2": 0, 00:21:29.673 "crdt3": 0 00:21:29.673 } 00:21:29.673 }, 00:21:29.673 { 00:21:29.673 "method": "nvmf_create_transport", 00:21:29.673 "params": { 00:21:29.673 "trtype": "TCP", 00:21:29.673 "max_queue_depth": 128, 00:21:29.673 "max_io_qpairs_per_ctrlr": 127, 00:21:29.673 "in_capsule_data_size": 4096, 00:21:29.673 "max_io_size": 131072, 00:21:29.673 "io_unit_size": 131072, 00:21:29.673 "max_aq_depth": 128, 00:21:29.673 "num_shared_buffers": 511, 00:21:29.673 "buf_cache_size": 4294967295, 00:21:29.673 "dif_insert_or_strip": false, 00:21:29.673 "zcopy": false, 00:21:29.673 "c2h_success": false, 00:21:29.673 "sock_priority": 0, 00:21:29.673 "abort_timeout_sec": 1, 00:21:29.673 "ack_timeout": 0, 00:21:29.673 "data_wr_pool_size": 0 00:21:29.673 } 00:21:29.673 }, 00:21:29.673 { 00:21:29.673 "method": "nvmf_create_subsystem", 00:21:29.673 "params": { 00:21:29.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.673 "allow_any_host": false, 00:21:29.673 "serial_number": "00000000000000000000", 00:21:29.673 "model_number": "SPDK bdev Controller", 00:21:29.673 "max_namespaces": 32, 00:21:29.673 "min_cntlid": 1, 00:21:29.673 "max_cntlid": 65519, 00:21:29.673 "ana_reporting": false 00:21:29.673 } 00:21:29.673 }, 00:21:29.673 { 00:21:29.673 "method": "nvmf_subsystem_add_host", 00:21:29.673 "params": { 00:21:29.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.673 "host": "nqn.2016-06.io.spdk:host1", 00:21:29.673 "psk": "key0" 00:21:29.673 } 00:21:29.673 }, 00:21:29.673 { 00:21:29.673 "method": "nvmf_subsystem_add_ns", 00:21:29.673 "params": { 00:21:29.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.673 "namespace": { 00:21:29.673 "nsid": 1, 00:21:29.673 "bdev_name": "malloc0", 00:21:29.673 "nguid": "54AF2AC0E2214EB8AB0C870BA88F8992", 00:21:29.673 "uuid": "54af2ac0-e221-4eb8-ab0c-870ba88f8992", 00:21:29.673 "no_auto_visible": false 00:21:29.673 } 00:21:29.673 } 00:21:29.673 }, 00:21:29.673 { 00:21:29.673 "method": "nvmf_subsystem_add_listener", 00:21:29.673 "params": { 00:21:29.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.673 "listen_address": { 00:21:29.673 "trtype": "TCP", 00:21:29.673 "adrfam": "IPv4", 
00:21:29.673 "traddr": "10.0.0.2", 00:21:29.673 "trsvcid": "4420" 00:21:29.673 }, 00:21:29.673 "secure_channel": false, 00:21:29.673 "sock_impl": "ssl" 00:21:29.673 } 00:21:29.673 } 00:21:29.673 ] 00:21:29.673 } 00:21:29.673 ] 00:21:29.673 }' 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=659334 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 659334 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 659334 ']' 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.673 12:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.673 [2024-11-25 12:56:09.571358] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:29.673 [2024-11-25 12:56:09.571418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.933 [2024-11-25 12:56:09.655966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.933 [2024-11-25 12:56:09.691740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.933 [2024-11-25 12:56:09.691775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.933 [2024-11-25 12:56:09.691784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.933 [2024-11-25 12:56:09.691791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.933 [2024-11-25 12:56:09.691796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
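The -c /dev/fd/62 in the nvmf_tgt command line above is the giveaway: the saved target config is handed to the new process through bash process substitution rather than a file on disk. A sketch of the pattern, assuming $tgtcfg holds the JSON captured earlier:

    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    # <(...) exposes the pipe to the child as /dev/fd/<n>
    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -c <(echo "$tgtcfg")

The listener comes up TLS-enabled straight from the config, which is why the experimental-TLS and listening notices appear below before any RPC is issued.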
00:21:29.933 [2024-11-25 12:56:09.692389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.194 [2024-11-25 12:56:09.891134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.194 [2024-11-25 12:56:09.923150] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.194 [2024-11-25 12:56:09.923380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.455 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.455 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:30.455 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.455 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.455 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=659413 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 659413 /var/tmp/bdevperf.sock 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 659413 ']' 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.717 12:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:30.717 "subsystems": [ 00:21:30.717 { 00:21:30.717 "subsystem": "keyring", 00:21:30.717 "config": [ 00:21:30.717 { 00:21:30.717 "method": "keyring_file_add_key", 00:21:30.717 "params": { 00:21:30.717 "name": "key0", 00:21:30.717 "path": "/tmp/tmp.8XLMICOfh0" 00:21:30.717 } 00:21:30.717 } 00:21:30.717 ] 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "subsystem": "iobuf", 00:21:30.717 "config": [ 00:21:30.717 { 00:21:30.717 "method": "iobuf_set_options", 00:21:30.717 "params": { 00:21:30.717 "small_pool_count": 8192, 00:21:30.717 "large_pool_count": 1024, 00:21:30.717 "small_bufsize": 8192, 00:21:30.717 "large_bufsize": 135168, 00:21:30.717 "enable_numa": false 00:21:30.717 } 00:21:30.717 } 00:21:30.717 ] 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "subsystem": "sock", 00:21:30.717 "config": [ 00:21:30.717 { 00:21:30.717 "method": "sock_set_default_impl", 00:21:30.717 "params": { 00:21:30.717 "impl_name": "posix" 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": "sock_impl_set_options", 00:21:30.717 "params": { 00:21:30.717 "impl_name": "ssl", 00:21:30.717 "recv_buf_size": 4096, 00:21:30.717 "send_buf_size": 4096, 00:21:30.717 "enable_recv_pipe": true, 00:21:30.717 "enable_quickack": false, 00:21:30.717 "enable_placement_id": 0, 00:21:30.717 "enable_zerocopy_send_server": true, 00:21:30.717 "enable_zerocopy_send_client": false, 00:21:30.717 "zerocopy_threshold": 0, 00:21:30.717 "tls_version": 0, 00:21:30.717 "enable_ktls": false 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": "sock_impl_set_options", 00:21:30.717 "params": { 00:21:30.717 "impl_name": "posix", 00:21:30.717 "recv_buf_size": 2097152, 00:21:30.717 "send_buf_size": 2097152, 00:21:30.717 "enable_recv_pipe": true, 00:21:30.717 "enable_quickack": false, 00:21:30.717 "enable_placement_id": 0, 00:21:30.717 "enable_zerocopy_send_server": true, 00:21:30.717 "enable_zerocopy_send_client": false, 00:21:30.717 "zerocopy_threshold": 0, 00:21:30.717 "tls_version": 0, 00:21:30.717 "enable_ktls": false 00:21:30.717 } 00:21:30.717 } 00:21:30.717 ] 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "subsystem": "vmd", 00:21:30.717 "config": [] 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "subsystem": "accel", 00:21:30.717 "config": [ 00:21:30.717 { 00:21:30.717 "method": "accel_set_options", 00:21:30.717 "params": { 00:21:30.717 "small_cache_size": 128, 00:21:30.717 "large_cache_size": 16, 00:21:30.717 "task_count": 2048, 00:21:30.717 "sequence_count": 2048, 00:21:30.717 "buf_count": 2048 00:21:30.717 } 00:21:30.717 } 00:21:30.717 ] 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "subsystem": "bdev", 00:21:30.717 "config": [ 00:21:30.717 { 00:21:30.717 "method": "bdev_set_options", 00:21:30.717 "params": { 00:21:30.717 "bdev_io_pool_size": 65535, 00:21:30.717 "bdev_io_cache_size": 256, 00:21:30.717 "bdev_auto_examine": true, 00:21:30.717 "iobuf_small_cache_size": 128, 00:21:30.717 "iobuf_large_cache_size": 16 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": 
"bdev_raid_set_options", 00:21:30.717 "params": { 00:21:30.717 "process_window_size_kb": 1024, 00:21:30.717 "process_max_bandwidth_mb_sec": 0 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": "bdev_iscsi_set_options", 00:21:30.717 "params": { 00:21:30.717 "timeout_sec": 30 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": "bdev_nvme_set_options", 00:21:30.717 "params": { 00:21:30.717 "action_on_timeout": "none", 00:21:30.717 "timeout_us": 0, 00:21:30.717 "timeout_admin_us": 0, 00:21:30.717 "keep_alive_timeout_ms": 10000, 00:21:30.717 "arbitration_burst": 0, 00:21:30.717 "low_priority_weight": 0, 00:21:30.717 "medium_priority_weight": 0, 00:21:30.717 "high_priority_weight": 0, 00:21:30.717 "nvme_adminq_poll_period_us": 10000, 00:21:30.717 "nvme_ioq_poll_period_us": 0, 00:21:30.717 "io_queue_requests": 512, 00:21:30.717 "delay_cmd_submit": true, 00:21:30.717 "transport_retry_count": 4, 00:21:30.717 "bdev_retry_count": 3, 00:21:30.717 "transport_ack_timeout": 0, 00:21:30.717 "ctrlr_loss_timeout_sec": 0, 00:21:30.717 "reconnect_delay_sec": 0, 00:21:30.717 "fast_io_fail_timeout_sec": 0, 00:21:30.717 "disable_auto_failback": false, 00:21:30.717 "generate_uuids": false, 00:21:30.717 "transport_tos": 0, 00:21:30.717 "nvme_error_stat": false, 00:21:30.717 "rdma_srq_size": 0, 00:21:30.717 "io_path_stat": false, 00:21:30.717 "allow_accel_sequence": false, 00:21:30.717 "rdma_max_cq_size": 0, 00:21:30.717 "rdma_cm_event_timeout_ms": 0, 00:21:30.717 "dhchap_digests": [ 00:21:30.717 "sha256", 00:21:30.717 "sha384", 00:21:30.717 "sha512" 00:21:30.717 ], 00:21:30.717 "dhchap_dhgroups": [ 00:21:30.717 "null", 00:21:30.717 "ffdhe2048", 00:21:30.717 "ffdhe3072", 00:21:30.717 "ffdhe4096", 00:21:30.717 "ffdhe6144", 00:21:30.717 "ffdhe8192" 00:21:30.717 ] 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": "bdev_nvme_attach_controller", 00:21:30.717 "params": { 00:21:30.717 "name": "nvme0", 00:21:30.717 "trtype": "TCP", 00:21:30.717 "adrfam": "IPv4", 00:21:30.717 "traddr": "10.0.0.2", 00:21:30.717 "trsvcid": "4420", 00:21:30.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.717 "prchk_reftag": false, 00:21:30.717 "prchk_guard": false, 00:21:30.717 "ctrlr_loss_timeout_sec": 0, 00:21:30.717 "reconnect_delay_sec": 0, 00:21:30.717 "fast_io_fail_timeout_sec": 0, 00:21:30.717 "psk": "key0", 00:21:30.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.717 "hdgst": false, 00:21:30.717 "ddgst": false, 00:21:30.717 "multipath": "multipath" 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": "bdev_nvme_set_hotplug", 00:21:30.717 "params": { 00:21:30.717 "period_us": 100000, 00:21:30.717 "enable": false 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": "bdev_enable_histogram", 00:21:30.717 "params": { 00:21:30.717 "name": "nvme0n1", 00:21:30.717 "enable": true 00:21:30.717 } 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "method": "bdev_wait_for_examine" 00:21:30.717 } 00:21:30.717 ] 00:21:30.717 }, 00:21:30.717 { 00:21:30.717 "subsystem": "nbd", 00:21:30.717 "config": [] 00:21:30.717 } 00:21:30.717 ] 00:21:30.718 }' 00:21:30.718 [2024-11-25 12:56:10.442454] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:21:30.718 [2024-11-25 12:56:10.442508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659413 ] 00:21:30.718 [2024-11-25 12:56:10.531842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.718 [2024-11-25 12:56:10.561901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.978 [2024-11-25 12:56:10.696852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.551 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.551 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:31.551 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.551 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:31.551 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.551 12:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:31.812 Running I/O for 1 seconds... 00:21:32.754 4561.00 IOPS, 17.82 MiB/s 00:21:32.754 Latency(us) 00:21:32.754 [2024-11-25T11:56:12.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.754 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:32.754 Verification LBA range: start 0x0 length 0x2000 00:21:32.754 nvme0n1 : 1.05 4467.94 17.45 0.00 0.00 28054.39 6662.83 101362.35 00:21:32.754 [2024-11-25T11:56:12.657Z] =================================================================================================================== 00:21:32.754 [2024-11-25T11:56:12.657Z] Total : 4467.94 17.45 0.00 0.00 28054.39 6662.83 101362.35 00:21:32.754 { 00:21:32.754 "results": [ 00:21:32.754 { 00:21:32.754 "job": "nvme0n1", 00:21:32.754 "core_mask": "0x2", 00:21:32.754 "workload": "verify", 00:21:32.754 "status": "finished", 00:21:32.754 "verify_range": { 00:21:32.754 "start": 0, 00:21:32.754 "length": 8192 00:21:32.754 }, 00:21:32.754 "queue_depth": 128, 00:21:32.754 "io_size": 4096, 00:21:32.754 "runtime": 1.049477, 00:21:32.754 "iops": 4467.939745225479, 00:21:32.754 "mibps": 17.452889629787027, 00:21:32.754 "io_failed": 0, 00:21:32.754 "io_timeout": 0, 00:21:32.754 "avg_latency_us": 28054.388307386078, 00:21:32.754 "min_latency_us": 6662.826666666667, 00:21:32.754 "max_latency_us": 101362.34666666666 00:21:32.754 } 00:21:32.754 ], 00:21:32.754 "core_count": 1 00:21:32.754 } 00:21:32.754 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:32.754 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:32.754 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:32.754 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:32.754 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:32.754 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' 
--id = --pid ']' 00:21:32.754 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:32.754 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:32.755 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:32.755 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:32.755 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:32.755 nvmf_trace.0 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 659413 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 659413 ']' 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 659413 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659413 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659413' 00:21:33.016 killing process with pid 659413 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 659413 00:21:33.016 Received shutdown signal, test time was about 1.000000 seconds 00:21:33.016 00:21:33.016 Latency(us) 00:21:33.016 [2024-11-25T11:56:12.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.016 [2024-11-25T11:56:12.919Z] =================================================================================================================== 00:21:33.016 [2024-11-25T11:56:12.919Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 659413 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:33.016 rmmod nvme_tcp 00:21:33.016 rmmod nvme_fabrics 00:21:33.016 rmmod nvme_keyring 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:33.016 12:56:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 659334 ']' 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 659334 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 659334 ']' 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 659334 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.016 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659334 00:21:33.277 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:33.277 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:33.277 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659334' 00:21:33.277 killing process with pid 659334 00:21:33.277 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 659334 00:21:33.277 12:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 659334 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.277 12:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.OmuKEC0pqm /tmp/tmp.SlKbRVhLIu /tmp/tmp.8XLMICOfh0 00:21:35.826 00:21:35.826 real 1m24.330s 00:21:35.826 user 2m9.564s 00:21:35.826 sys 0m27.386s 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.826 ************************************ 00:21:35.826 END TEST nvmf_tls 00:21:35.826 
************************************ 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:35.826 ************************************ 00:21:35.826 START TEST nvmf_fips 00:21:35.826 ************************************ 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:35.826 * Looking for test storage... 00:21:35.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.826 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:35.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.827 --rc genhtml_branch_coverage=1 00:21:35.827 --rc genhtml_function_coverage=1 00:21:35.827 --rc genhtml_legend=1 00:21:35.827 --rc geninfo_all_blocks=1 00:21:35.827 --rc geninfo_unexecuted_blocks=1 00:21:35.827 00:21:35.827 ' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:35.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.827 --rc genhtml_branch_coverage=1 00:21:35.827 --rc genhtml_function_coverage=1 00:21:35.827 --rc genhtml_legend=1 00:21:35.827 --rc geninfo_all_blocks=1 00:21:35.827 --rc geninfo_unexecuted_blocks=1 00:21:35.827 00:21:35.827 ' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:35.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.827 --rc genhtml_branch_coverage=1 00:21:35.827 --rc genhtml_function_coverage=1 00:21:35.827 --rc genhtml_legend=1 00:21:35.827 --rc geninfo_all_blocks=1 00:21:35.827 --rc geninfo_unexecuted_blocks=1 00:21:35.827 00:21:35.827 ' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:35.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.827 --rc genhtml_branch_coverage=1 00:21:35.827 --rc genhtml_function_coverage=1 00:21:35.827 --rc genhtml_legend=1 00:21:35.827 --rc geninfo_all_blocks=1 00:21:35.827 --rc geninfo_unexecuted_blocks=1 00:21:35.827 00:21:35.827 ' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:35.827 12:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:35.827 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:35.828 Error setting digest 00:21:35.828 40F2EADC0F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:35.828 40F2EADC0F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.828 
12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:35.828 12:56:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.973 12:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:43.973 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:43.973 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.973 12:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:43.973 Found net devices under 0000:31:00.0: cvl_0_0 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:43.973 Found net devices under 0000:31:00.1: cvl_0_1 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:43.973 12:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:43.973 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:44.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:21:44.234 00:21:44.234 --- 10.0.0.2 ping statistics --- 00:21:44.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.234 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:21:44.234 00:21:44.234 --- 10.0.0.1 ping statistics --- 00:21:44.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.234 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.234 12:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=664800 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 664800 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 664800 ']' 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.234 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.234 [2024-11-25 12:56:24.074693] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
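The nvmftestinit/nvmf_tcp_init sequence traced above builds a two-endpoint topology on a single host: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens TCP/4420 toward the initiator side, and a ping in each direction proves the path before nvmf_tgt is started inside the namespace. Condensed from the trace (interface names are specific to this test bed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator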
00:21:44.234 [2024-11-25 12:56:24.074744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.494 [2024-11-25 12:56:24.178476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.494 [2024-11-25 12:56:24.217773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.494 [2024-11-25 12:56:24.217816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.494 [2024-11-25 12:56:24.217824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.494 [2024-11-25 12:56:24.217832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.494 [2024-11-25 12:56:24.217838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.494 [2024-11-25 12:56:24.218570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.066 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.066 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:45.066 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:45.066 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.066 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.066 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.066 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:45.066 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:45.067 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:45.067 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.T5h 00:21:45.067 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:45.067 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.T5h 00:21:45.067 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.T5h 00:21:45.067 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.T5h 00:21:45.067 12:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.328 [2024-11-25 12:56:25.078592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.328 [2024-11-25 12:56:25.094588] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.328 [2024-11-25 12:56:25.094957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.328 malloc0 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.328 12:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=664909 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 664909 /var/tmp/bdevperf.sock 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 664909 ']' 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.328 12:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:45.328 [2024-11-25 12:56:25.227268] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:21:45.328 [2024-11-25 12:56:25.227344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664909 ] 00:21:45.591 [2024-11-25 12:56:25.299856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.591 [2024-11-25 12:56:25.336442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.162 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.162 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:46.162 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.T5h 00:21:46.422 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:46.682 [2024-11-25 12:56:26.346795] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.682 TLSTESTn1 00:21:46.682 12:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:46.682 Running I/O for 10 seconds... 
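Lines 137-152 of fips.sh, traced above, boil down to a short RPC sequence against the waiting bdevperf app: write the PSK interchange key to a 0600-protected temp file, register it in the keyring, attach an NVMe/TCP controller that names the key, and start the verify workload. Condensed from the trace, with the long paths to rpc.py and bdevperf.py shortened and the key body elided:

    KEY=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:...' > "$KEY"   # full key value appears in the trace
    chmod 0600 "$KEY"
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests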
00:21:49.004 4624.00 IOPS, 18.06 MiB/s [2024-11-25T11:56:29.847Z] 5080.00 IOPS, 19.84 MiB/s [2024-11-25T11:56:30.968Z] 5357.67 IOPS, 20.93 MiB/s [2024-11-25T11:56:31.908Z] 5256.25 IOPS, 20.53 MiB/s [2024-11-25T11:56:32.851Z] 5216.00 IOPS, 20.38 MiB/s [2024-11-25T11:56:33.791Z] 5414.17 IOPS, 21.15 MiB/s [2024-11-25T11:56:34.731Z] 5542.57 IOPS, 21.65 MiB/s [2024-11-25T11:56:35.675Z] 5387.62 IOPS, 21.05 MiB/s [2024-11-25T11:56:36.618Z] 5483.44 IOPS, 21.42 MiB/s [2024-11-25T11:56:36.618Z] 5494.30 IOPS, 21.46 MiB/s 00:21:56.715 Latency(us) 00:21:56.715 [2024-11-25T11:56:36.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.715 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:56.715 Verification LBA range: start 0x0 length 0x2000 00:21:56.715 TLSTESTn1 : 10.02 5497.53 21.47 0.00 0.00 23248.89 4805.97 57671.68 00:21:56.715 [2024-11-25T11:56:36.618Z] =================================================================================================================== 00:21:56.715 [2024-11-25T11:56:36.618Z] Total : 5497.53 21.47 0.00 0.00 23248.89 4805.97 57671.68 00:21:56.715 { 00:21:56.715 "results": [ 00:21:56.715 { 00:21:56.715 "job": "TLSTESTn1", 00:21:56.715 "core_mask": "0x4", 00:21:56.715 "workload": "verify", 00:21:56.715 "status": "finished", 00:21:56.715 "verify_range": { 00:21:56.715 "start": 0, 00:21:56.715 "length": 8192 00:21:56.715 }, 00:21:56.715 "queue_depth": 128, 00:21:56.715 "io_size": 4096, 00:21:56.715 "runtime": 10.016864, 00:21:56.715 "iops": 5497.5289671497985, 00:21:56.715 "mibps": 21.4747225279289, 00:21:56.715 "io_failed": 0, 00:21:56.715 "io_timeout": 0, 00:21:56.715 "avg_latency_us": 23248.89242972325, 00:21:56.715 "min_latency_us": 4805.973333333333, 00:21:56.715 "max_latency_us": 57671.68 00:21:56.715 } 00:21:56.715 ], 00:21:56.715 "core_count": 1 00:21:56.715 } 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:56.715 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:56.715 nvmf_trace.0 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 664909 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 664909 ']' 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 
-- # kill -0 664909 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 664909 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 664909' 00:21:56.976 killing process with pid 664909 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 664909 00:21:56.976 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.976 00:21:56.976 Latency(us) 00:21:56.976 [2024-11-25T11:56:36.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.976 [2024-11-25T11:56:36.879Z] =================================================================================================================== 00:21:56.976 [2024-11-25T11:56:36.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 664909 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.976 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.237 rmmod nvme_tcp 00:21:57.237 rmmod nvme_fabrics 00:21:57.237 rmmod nvme_keyring 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 664800 ']' 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 664800 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 664800 ']' 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 664800 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 664800 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 664800' 00:21:57.237 killing process with pid 664800 00:21:57.237 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 664800 00:21:57.238 12:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 664800 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.238 12:56:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.T5h 00:21:59.780 00:21:59.780 real 0m23.926s 00:21:59.780 user 0m24.918s 00:21:59.780 sys 0m10.261s 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:59.780 ************************************ 00:21:59.780 END TEST nvmf_fips 00:21:59.780 ************************************ 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:59.780 ************************************ 00:21:59.780 START TEST nvmf_control_msg_list 00:21:59.780 ************************************ 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:59.780 * Looking for test storage... 
00:21:59.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.780 --rc genhtml_branch_coverage=1 00:21:59.780 --rc genhtml_function_coverage=1 00:21:59.780 --rc genhtml_legend=1 00:21:59.780 --rc geninfo_all_blocks=1 00:21:59.780 --rc geninfo_unexecuted_blocks=1 00:21:59.780 00:21:59.780 ' 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.780 --rc genhtml_branch_coverage=1 00:21:59.780 --rc genhtml_function_coverage=1 00:21:59.780 --rc genhtml_legend=1 00:21:59.780 --rc geninfo_all_blocks=1 00:21:59.780 --rc geninfo_unexecuted_blocks=1 00:21:59.780 00:21:59.780 ' 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.780 --rc genhtml_branch_coverage=1 00:21:59.780 --rc genhtml_function_coverage=1 00:21:59.780 --rc genhtml_legend=1 00:21:59.780 --rc geninfo_all_blocks=1 00:21:59.780 --rc geninfo_unexecuted_blocks=1 00:21:59.780 00:21:59.780 ' 00:21:59.780 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:59.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.780 --rc genhtml_branch_coverage=1 00:21:59.780 --rc genhtml_function_coverage=1 00:21:59.781 --rc genhtml_legend=1 00:21:59.781 --rc geninfo_all_blocks=1 00:21:59.781 --rc geninfo_unexecuted_blocks=1 00:21:59.781 00:21:59.781 ' 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.781 12:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:07.931 12:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:07.931 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.931 12:56:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:07.931 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:07.931 Found net devices under 0000:31:00.0: cvl_0_0 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:07.931 Found net devices under 0000:31:00.1: cvl_0_1 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.931 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.193 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.193 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.193 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.193 12:56:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.193 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.193 12:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.193 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.193 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:22:08.193 00:22:08.193 --- 10.0.0.2 ping statistics --- 00:22:08.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.193 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:22:08.193 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:22:08.193 00:22:08.193 --- 10.0.0.1 ping statistics --- 00:22:08.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.194 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.194 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.454 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:08.454 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.454 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.454 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=671906 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 671906 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 671906 ']' 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.455 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:08.455 [2024-11-25 12:56:48.182550] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:22:08.455 [2024-11-25 12:56:48.182622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.455 [2024-11-25 12:56:48.273212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.455 [2024-11-25 12:56:48.313398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.455 [2024-11-25 12:56:48.313436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.455 [2024-11-25 12:56:48.313444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.455 [2024-11-25 12:56:48.313451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.455 [2024-11-25 12:56:48.313457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:08.455 [2024-11-25 12:56:48.314097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.399 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.399 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:09.399 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.399 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.399 12:56:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:09.399 [2024-11-25 12:56:49.043715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:09.399 Malloc0 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.399 12:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:09.399 [2024-11-25 12:56:49.094570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=672236 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=672237 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=672238 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 672236 00:22:09.399 12:56:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:09.399 [2024-11-25 12:56:49.164909] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:09.399 [2024-11-25 12:56:49.195155] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:09.399 [2024-11-25 12:56:49.195441] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:10.786 Initializing NVMe Controllers 00:22:10.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:10.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:10.786 Initialization complete. Launching workers. 
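The rpc_cmd sequence traced just above builds the whole control-message fixture: a TCP transport clamped to 768 bytes of in-capsule data and a single control message, one subsystem backed by a 32 MiB malloc namespace, and one listener. Condensed into plain rpc.py calls; the rpc helper below is a convenience of this sketch only, assuming the default /var/tmp/spdk.sock.

    # Condensed from the rpc_cmd calls traced above.
    rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # sketch helper, default socket

    rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    rpc bdev_malloc_create -b Malloc0 32 512     # 32 MiB bdev, 512-byte blocks
    rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420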
00:22:10.786 ======================================================== 00:22:10.786 Latency(us) 00:22:10.786 Device Information : IOPS MiB/s Average min max 00:22:10.786 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1510.00 5.90 662.29 294.27 1182.25 00:22:10.786 ======================================================== 00:22:10.786 Total : 1510.00 5.90 662.29 294.27 1182.25 00:22:10.786 00:22:10.786 [2024-11-25 12:56:50.259045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc55a0 is same with the state(6) to be set 00:22:10.786 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 672237 00:22:10.786 Initializing NVMe Controllers 00:22:10.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:10.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:10.786 Initialization complete. Launching workers. 00:22:10.786 ======================================================== 00:22:10.786 Latency(us) 00:22:10.786 Device Information : IOPS MiB/s Average min max 00:22:10.787 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40915.80 40791.40 41270.29 00:22:10.787 ======================================================== 00:22:10.787 Total : 25.00 0.10 40915.80 40791.40 41270.29 00:22:10.787 00:22:10.787 [2024-11-25 12:56:50.421824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04170 is same with the state(6) to be set 00:22:10.787 [2024-11-25 12:56:50.421846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a04170 is same with the state(6) to be set 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 672238 00:22:10.787 Initializing NVMe Controllers 00:22:10.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:10.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:10.787 Initialization complete. Launching workers. 
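The three perf workers are identical spdk_nvme_perf invocations apart from their core masks (-c 0x2, 0x4, 0x8); with the transport limited to a single control message, the concurrent queue-depth-1 readers contend for that one buffer, which appears to be the point of the test. One invocation, taken from the trace:

    # Third worker (perf_pid3); the other two differ only in -c 0x2 / -c 0x4.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0x8 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'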
00:22:10.787 ======================================================== 00:22:10.787 Latency(us) 00:22:10.787 Device Information : IOPS MiB/s Average min max 00:22:10.787 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1658.00 6.48 603.10 238.65 1263.90 00:22:10.787 ======================================================== 00:22:10.787 Total : 1658.00 6.48 603.10 238.65 1263.90 00:22:10.787 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:10.787 rmmod nvme_tcp 00:22:10.787 rmmod nvme_fabrics 00:22:10.787 rmmod nvme_keyring 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 671906 ']' 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 671906 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 671906 ']' 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 671906 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 671906 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 671906' 00:22:10.787 killing process with pid 671906 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 671906 00:22:10.787 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 671906 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.048 12:56:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.960 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:12.960 00:22:12.960 real 0m13.519s 00:22:12.960 user 0m8.531s 00:22:12.960 sys 0m7.239s 00:22:12.960 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.960 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:12.960 ************************************ 00:22:12.960 END TEST nvmf_control_msg_list 00:22:12.960 ************************************ 00:22:12.960 12:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:12.960 12:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:12.960 12:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.960 12:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:13.222 ************************************ 00:22:13.222 START TEST nvmf_wait_for_buf 00:22:13.222 ************************************ 00:22:13.222 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:13.222 * Looking for test storage... 
00:22:13.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:13.222 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:13.222 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:13.222 12:56:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:13.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.222 --rc genhtml_branch_coverage=1 00:22:13.222 --rc genhtml_function_coverage=1 00:22:13.222 --rc genhtml_legend=1 00:22:13.222 --rc geninfo_all_blocks=1 00:22:13.222 --rc geninfo_unexecuted_blocks=1 00:22:13.222 00:22:13.222 ' 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:13.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.222 --rc genhtml_branch_coverage=1 00:22:13.222 --rc genhtml_function_coverage=1 00:22:13.222 --rc genhtml_legend=1 00:22:13.222 --rc geninfo_all_blocks=1 00:22:13.222 --rc geninfo_unexecuted_blocks=1 00:22:13.222 00:22:13.222 ' 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:13.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.222 --rc genhtml_branch_coverage=1 00:22:13.222 --rc genhtml_function_coverage=1 00:22:13.222 --rc genhtml_legend=1 00:22:13.222 --rc geninfo_all_blocks=1 00:22:13.222 --rc geninfo_unexecuted_blocks=1 00:22:13.222 00:22:13.222 ' 00:22:13.222 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:13.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.222 --rc genhtml_branch_coverage=1 00:22:13.222 --rc genhtml_function_coverage=1 00:22:13.222 --rc genhtml_legend=1 00:22:13.223 --rc geninfo_all_blocks=1 00:22:13.223 --rc geninfo_unexecuted_blocks=1 00:22:13.223 00:22:13.223 ' 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.223 12:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:13.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
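The '[: : integer expression expected' diagnostic above is benign: build_nvmf_app_args hands an empty string to test(1)'s -eq, which only accepts integers, so the condition is treated as false and the script simply moves on. A minimal reproduction (the variable name here is hypothetical):

    flag=""                              # empty, e.g. an unset feature toggle
    if [ "$flag" -eq 1 ]; then           # -> bash: [: : integer expression expected
        echo enabled
    fi
    if [ "${flag:-0}" -eq 1 ]; then      # defensive spelling that keeps the log quiet
        echo enabled
    fi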
'[' -z tcp ']' 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:13.223 12:56:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.232 
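remove_spdk_ns is run through xtrace_disable_per_cmd, which evals the command with '15> /dev/null' appended. That pattern suggests the suite points bash's xtrace output at file descriptor 15 so the trace of one deliberately noisy command can be discarded; a sketch of the trick under that assumption:

    exec 15>&2                # open fd 15 where xtrace should normally land
    BASH_XTRACEFD=15          # route 'set -x' output to fd 15 instead of stderr
    set -x
    echo traced                          # its '+ echo traced' line is shown
    eval 'echo quietly 15> /dev/null'    # the inner command's trace is dropped
                                         # (the eval line itself still traces)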
12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:23.232 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:23.232 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.232 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:23.233 Found net devices under 0000:31:00.0: cvl_0_0 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:23.233 Found net devices under 0000:31:00.1: cvl_0_1 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.233 12:57:01 
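gather_supported_nvmf_pci_devs, traced above, is essentially a sysfs walk: match each PCI function's vendor/device pair against the supported ID tables (0x8086:0x159b, the Intel E810 port, is what both hits here resolve to), then record the kernel net interface registered under that function. A rough standalone equivalent:

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do              # e.g. .../net/cvl_0_0
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done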
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:22:23.233 00:22:23.233 --- 10.0.0.2 ping statistics --- 00:22:23.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.233 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:22:23.233 00:22:23.233 --- 10.0.0.1 ping statistics --- 00:22:23.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.233 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=677361 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 677361 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 677361 ']' 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.233 12:57:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.233 [2024-11-25 12:57:01.790303] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
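nvmf_tcp_init, traced just above, splits the two ports into roles: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), an ACCEPT rule for TCP/4420 is prepended to INPUT, and one ping in each direction proves the path. The same two-namespace topology can be reproduced without E810 hardware using a veth pair (illustrative names, needs root):

    ip netns add nvmf_tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns nvmf_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init                            # initiator side
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt   # target side
    ip link set veth_init up
    ip netns exec nvmf_tgt_ns ip link set veth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root namespace -> target namespace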
00:22:23.233 [2024-11-25 12:57:01.790370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.233 [2024-11-25 12:57:01.880845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.233 [2024-11-25 12:57:01.920811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.234 [2024-11-25 12:57:01.920850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.234 [2024-11-25 12:57:01.920858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.234 [2024-11-25 12:57:01.920871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.234 [2024-11-25 12:57:01.920876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.234 [2024-11-25 12:57:01.921508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.234 12:57:02 
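nvmfappstart launches nvmf_tgt inside the target namespace with --wait-for-rpc, so the app idles until framework_start_init arrives over RPC; waitforlisten polls the UNIX socket before any rpc_cmd runs, and the provisioning continues just below with the transport, subsystem, namespace and listener. Collapsed into direct commands, the flow is roughly as follows (paths shortened, poll loop illustrative, RPC flags copied from the trace):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512      # 32 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The tiny 154-buffer small pool set here is deliberate: it is what the test later expects to exhaust.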
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 Malloc0 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 [2024-11-25 12:57:02.720450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 [2024-11-25 12:57:02.756681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.234 12:57:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:23.234 [2024-11-25 12:57:02.857941] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:24.619 Initializing NVMe Controllers 00:22:24.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:24.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:24.619 Initialization complete. Launching workers. 00:22:24.619 ======================================================== 00:22:24.620 Latency(us) 00:22:24.620 Device Information : IOPS MiB/s Average min max 00:22:24.620 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 166002.37 47871.49 191553.48 00:22:24.620 ======================================================== 00:22:24.620 Total : 25.00 3.12 166002.37 47871.49 191553.48 00:22:24.620 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.620 rmmod nvme_tcp 00:22:24.620 rmmod nvme_fabrics 00:22:24.620 rmmod nvme_keyring 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 677361 ']' 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 677361 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 677361 ']' 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 677361 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
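The pass condition just traced inverts the usual expectation: with only 154 small iobufs configured, the 4-deep 128 KiB randread run must starve the pool, so the test fails only when small_pool.retry stays at zero (here it reached 374 retried allocations). The check reduces to:

    retry_count=$(./scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && { echo "no buffer starvation observed" >&2; exit 1; }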
common/autotest_common.sh@959 -- # uname 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 677361 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 677361' 00:22:24.620 killing process with pid 677361 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 677361 00:22:24.620 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 677361 00:22:24.881 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.881 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.881 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.881 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:24.881 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:24.882 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.882 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.882 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.882 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.882 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.882 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.882 12:57:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.795 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.795 00:22:26.795 real 0m13.754s 00:22:26.795 user 0m5.343s 00:22:26.795 sys 0m6.968s 00:22:26.795 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.795 12:57:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:26.795 ************************************ 00:22:26.795 END TEST nvmf_wait_for_buf 00:22:26.795 ************************************ 00:22:26.795 12:57:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:26.795 12:57:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:26.795 12:57:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:26.795 12:57:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:26.795 12:57:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.795 12:57:06 
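run_test, which prints the END banner and the real/user/sys figures above and the START banner for nvmf_perf_adq below, is in essence a timed, banner-wrapped invocation. A hypothetical minimal equivalent:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                     # prints the real/user/sys lines when done
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test nvmf_perf_adq ./test/nvmf/target/perf_adq.sh --transport=tcp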
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:34.938 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:34.938 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:34.938 Found net devices under 0000:31:00.0: cvl_0_0 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:34.938 Found net devices under 0000:31:00.1: cvl_0_1 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.938 12:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:35.201 ************************************ 00:22:35.201 START TEST nvmf_perf_adq 00:22:35.201 ************************************ 00:22:35.201 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:35.201 * Looking for test storage... 00:22:35.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.201 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.201 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.201 12:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.201 12:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.201 --rc genhtml_branch_coverage=1 00:22:35.201 --rc genhtml_function_coverage=1 00:22:35.201 --rc genhtml_legend=1 00:22:35.201 --rc geninfo_all_blocks=1 00:22:35.201 --rc geninfo_unexecuted_blocks=1 00:22:35.201 00:22:35.201 ' 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.201 --rc genhtml_branch_coverage=1 00:22:35.201 --rc genhtml_function_coverage=1 00:22:35.201 --rc genhtml_legend=1 00:22:35.201 --rc geninfo_all_blocks=1 00:22:35.201 --rc geninfo_unexecuted_blocks=1 00:22:35.201 00:22:35.201 ' 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.201 --rc genhtml_branch_coverage=1 00:22:35.201 --rc genhtml_function_coverage=1 00:22:35.201 --rc genhtml_legend=1 00:22:35.201 --rc geninfo_all_blocks=1 00:22:35.201 --rc geninfo_unexecuted_blocks=1 00:22:35.201 00:22:35.201 ' 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.201 --rc genhtml_branch_coverage=1 00:22:35.201 --rc genhtml_function_coverage=1 00:22:35.201 --rc genhtml_legend=1 00:22:35.201 --rc geninfo_all_blocks=1 00:22:35.201 --rc geninfo_unexecuted_blocks=1 00:22:35.201 00:22:35.201 ' 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
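As in the earlier run, lcov reports 1.15, so lt 1.15 2 holds and the 1.x option spellings are exported. A sketch of that version gate (the 2.x key names in the else branch are an assumption, not taken from this log):

    ver=$(lcov --version | awk '{print $NF}')        # e.g. '1.15'
    if [[ ${ver%%.*} -lt 2 ]]; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    else
        export LCOV_OPTS='--rc branch_coverage=1 --rc function_coverage=1'   # assumed 2.x spelling
    fi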
00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.201 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:35.202 12:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.202 12:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.351 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:43.351 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:43.351 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.351 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:43.352 Found net devices under 0000:31:00.0: cvl_0_0 00:22:43.352 12:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:43.352 Found net devices under 0000:31:00.1: cvl_0_1 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:43.352 12:57:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:45.268 12:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:47.184 12:57:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:52.478 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:52.479 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:52.479 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:52.479 Found net devices under 0000:31:00.0: cvl_0_0 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:52.479 Found net devices under 0000:31:00.1: cvl_0_1 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.479 12:57:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.479 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.479 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.479 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:52.479 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.479 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:52.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:22:52.480 00:22:52.480 --- 10.0.0.2 ping statistics --- 00:22:52.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.480 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
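For reference, the topology nvmftestinit assembles in the records above (interface names are whatever the ice driver assigned on this node, here cvl_0_0/cvl_0_1): the first E810 port is moved into a private network namespace and becomes the target at 10.0.0.2, the second port stays in the root namespace as the initiator at 10.0.0.1, and TCP/4420 is opened before connectivity is probed in both directions. Condensed:

    ip netns add cvl_0_0_ns_spdk                           # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the ipts wrapper tags this with an SPDK_NVMF comment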
00:22:52.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:22:52.480 00:22:52.480 --- 10.0.0.1 ping statistics --- 00:22:52.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.480 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=689089 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 689089 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 689089 ']' 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.480 12:57:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.480 [2024-11-25 12:57:32.296439] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
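The target is launched inside the namespace with -m 0xF (reactors on cores 0-3), -e 0xFFFF (all tracepoint groups), and --wait-for-rpc, which parks the app before subsystem initialization so that socket-implementation options can be applied before the TCP transport exists. The records that follow drive exactly that sequence over /var/tmp/spdk.sock (rpc_cmd is the harness wrapper around scripts/rpc.py); a sketch of the same calls:

    rpc.py sock_get_default_impl                 # reports "posix" here
    rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    rpc.py framework_start_init                  # subsystem init proceeds; transports can now be created
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0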
00:22:52.480 [2024-11-25 12:57:32.296505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.741 [2024-11-25 12:57:32.386654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.741 [2024-11-25 12:57:32.429464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.741 [2024-11-25 12:57:32.429502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.741 [2024-11-25 12:57:32.429510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.741 [2024-11-25 12:57:32.429516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.741 [2024-11-25 12:57:32.429522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.741 [2024-11-25 12:57:32.431151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.741 [2024-11-25 12:57:32.431270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.741 [2024-11-25 12:57:32.431426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.741 [2024-11-25 12:57:32.431427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.314 
12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.314 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 [2024-11-25 12:57:33.285366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 Malloc1 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.575 [2024-11-25 12:57:33.358255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=689343 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:53.575 12:57:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:55.490 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:55.490 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.490 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.752 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.752 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:55.752 "tick_rate": 2400000000, 00:22:55.752 "poll_groups": [ 00:22:55.752 { 00:22:55.752 "name": "nvmf_tgt_poll_group_000", 00:22:55.752 "admin_qpairs": 1, 00:22:55.752 "io_qpairs": 1, 00:22:55.752 "current_admin_qpairs": 1, 00:22:55.752 "current_io_qpairs": 1, 00:22:55.752 "pending_bdev_io": 0, 00:22:55.752 "completed_nvme_io": 19631, 00:22:55.752 "transports": [ 00:22:55.752 { 00:22:55.752 "trtype": "TCP" 00:22:55.752 } 00:22:55.752 ] 00:22:55.752 }, 00:22:55.752 { 00:22:55.752 "name": "nvmf_tgt_poll_group_001", 00:22:55.752 "admin_qpairs": 0, 00:22:55.752 "io_qpairs": 1, 00:22:55.752 "current_admin_qpairs": 0, 00:22:55.752 "current_io_qpairs": 1, 00:22:55.752 "pending_bdev_io": 0, 00:22:55.752 "completed_nvme_io": 27828, 00:22:55.752 "transports": [ 00:22:55.752 { 00:22:55.752 "trtype": "TCP" 00:22:55.752 } 00:22:55.752 ] 00:22:55.752 }, 00:22:55.752 { 00:22:55.752 "name": "nvmf_tgt_poll_group_002", 00:22:55.752 "admin_qpairs": 0, 00:22:55.752 "io_qpairs": 1, 00:22:55.752 "current_admin_qpairs": 0, 00:22:55.752 "current_io_qpairs": 1, 00:22:55.752 "pending_bdev_io": 0, 00:22:55.752 "completed_nvme_io": 21751, 00:22:55.752 "transports": [ 00:22:55.752 { 00:22:55.752 "trtype": "TCP" 00:22:55.752 } 00:22:55.752 ] 00:22:55.752 }, 00:22:55.752 { 00:22:55.752 "name": "nvmf_tgt_poll_group_003", 00:22:55.752 "admin_qpairs": 0, 00:22:55.752 "io_qpairs": 1, 00:22:55.752 "current_admin_qpairs": 0, 00:22:55.752 "current_io_qpairs": 1, 00:22:55.752 "pending_bdev_io": 0, 00:22:55.752 "completed_nvme_io": 20023, 00:22:55.752 "transports": [ 00:22:55.752 { 00:22:55.752 "trtype": "TCP" 00:22:55.752 } 00:22:55.752 ] 00:22:55.752 } 00:22:55.752 ] 00:22:55.752 }' 00:22:55.752 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:55.752 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:55.752 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:55.752 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:55.752 12:57:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 689343 00:23:03.948 Initializing NVMe Controllers 00:23:03.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:03.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:03.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:03.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:23:03.948 Initialization complete. Launching workers. 00:23:03.948 ======================================================== 00:23:03.948 Latency(us) 00:23:03.948 Device Information : IOPS MiB/s Average min max 00:23:03.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11057.20 43.19 5789.31 1486.27 9217.59 00:23:03.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14897.90 58.19 4296.09 1302.74 9623.30 00:23:03.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14239.50 55.62 4494.13 1452.26 11572.49 00:23:03.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13451.60 52.55 4758.00 1481.90 46993.69 00:23:03.948 ======================================================== 00:23:03.948 Total : 53646.19 209.56 4772.25 1302.74 46993.69 00:23:03.948 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.948 rmmod nvme_tcp 00:23:03.948 rmmod nvme_fabrics 00:23:03.948 rmmod nvme_keyring 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 689089 ']' 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 689089 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 689089 ']' 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 689089 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689089 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689089' 00:23:03.948 killing process with pid 689089 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 689089 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 689089 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.948 12:57:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.525 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.526 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:06.526 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:06.526 12:57:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:07.911 12:57:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:09.825 12:57:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.116 12:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:15.116 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:15.116 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:15.116 Found net devices under 0000:31:00.0: cvl_0_0 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.116 12:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:15.116 Found net devices under 0000:31:00.1: cvl_0_1 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.116 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:23:15.117 00:23:15.117 --- 10.0.0.2 ping statistics --- 00:23:15.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.117 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:23:15.117 00:23:15.117 --- 10.0.0.1 ping statistics --- 00:23:15.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.117 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:15.117 net.core.busy_poll = 1 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:23:15.117 net.core.busy_read = 1 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:15.117 12:57:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=693916 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 693916 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 693916 ']' 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.377 12:57:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:15.377 [2024-11-25 12:57:55.218350] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:23:15.377 [2024-11-25 12:57:55.218407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.638 [2024-11-25 12:57:55.304351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:15.638 [2024-11-25 12:57:55.342159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
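adq_configure_driver, traced just above before the target app starts, is the entire NIC-side ADQ setup: hardware TC offload on, the ice driver's channel-pkt-inspect-optimize private flag off, busy polling enabled system-wide, an mqprio root qdisc splitting the port's queues into two traffic classes, and a hardware-only flower filter pinning NVMe/TCP traffic to TC1. A sketch of the same steps, assuming an ice-driver port $DEV inside namespace $NS (the private-flag name is ice-specific):

    NS=cvl_0_0_ns_spdk DEV=cvl_0_0
    ip netns exec "$NS" ethtool --offload "$DEV" hw-tc-offload on
    ip netns exec "$NS" ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1   # busy-poll instead of sleeping in epoll
    sysctl -w net.core.busy_read=1   # busy-wait on socket reads
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (NVMe/TCP)
    ip netns exec "$NS" tc qdisc add dev "$DEV" root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec "$NS" tc qdisc add dev "$DEV" ingress
    # skip_sw: the filter lives only in NIC hardware, so steering costs no host CPU
    ip netns exec "$NS" tc filter add dev "$DEV" protocol ip parent ffff: prio 1 \
        flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper run right after these commands aligns transmit queues with the same cores, so both directions of a connection stay on one queue pair.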
00:23:15.638 [2024-11-25 12:57:55.342194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.638 [2024-11-25 12:57:55.342202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.638 [2024-11-25 12:57:55.342209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.638 [2024-11-25 12:57:55.342215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.638 [2024-11-25 12:57:55.343746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.638 [2024-11-25 12:57:55.343885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.638 [2024-11-25 12:57:55.343945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.638 [2024-11-25 12:57:55.343945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.209 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.469 12:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.469 [2024-11-25 12:57:56.185628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.469 Malloc1 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.469 [2024-11-25 12:57:56.255171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=694137 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:16.469 12:57:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:18.380 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:18.380 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.380 12:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:18.641 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.641 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:18.641 "tick_rate": 2400000000, 00:23:18.641 "poll_groups": [ 00:23:18.641 { 00:23:18.641 "name": "nvmf_tgt_poll_group_000", 00:23:18.641 "admin_qpairs": 1, 00:23:18.641 "io_qpairs": 2, 00:23:18.641 "current_admin_qpairs": 1, 00:23:18.641 "current_io_qpairs": 2, 00:23:18.641 "pending_bdev_io": 0, 00:23:18.641 "completed_nvme_io": 27501, 00:23:18.641 "transports": [ 00:23:18.641 { 00:23:18.641 "trtype": "TCP" 00:23:18.641 } 00:23:18.641 ] 00:23:18.641 }, 00:23:18.641 { 00:23:18.641 "name": "nvmf_tgt_poll_group_001", 00:23:18.641 "admin_qpairs": 0, 00:23:18.641 "io_qpairs": 2, 00:23:18.641 "current_admin_qpairs": 0, 00:23:18.641 "current_io_qpairs": 2, 00:23:18.641 "pending_bdev_io": 0, 00:23:18.641 "completed_nvme_io": 40519, 00:23:18.641 "transports": [ 00:23:18.641 { 00:23:18.641 "trtype": "TCP" 00:23:18.641 } 00:23:18.641 ] 00:23:18.641 }, 00:23:18.641 { 00:23:18.641 "name": "nvmf_tgt_poll_group_002", 00:23:18.641 "admin_qpairs": 0, 00:23:18.641 "io_qpairs": 0, 00:23:18.641 "current_admin_qpairs": 0, 00:23:18.641 "current_io_qpairs": 0, 00:23:18.641 "pending_bdev_io": 0, 00:23:18.641 "completed_nvme_io": 0, 00:23:18.641 "transports": [ 00:23:18.641 { 00:23:18.641 "trtype": "TCP" 00:23:18.641 } 00:23:18.641 ] 00:23:18.641 }, 00:23:18.641 { 00:23:18.641 "name": "nvmf_tgt_poll_group_003", 00:23:18.641 "admin_qpairs": 0, 00:23:18.641 "io_qpairs": 0, 00:23:18.641 "current_admin_qpairs": 0, 00:23:18.641 "current_io_qpairs": 0, 00:23:18.641 "pending_bdev_io": 0, 00:23:18.641 "completed_nvme_io": 0, 00:23:18.641 "transports": [ 00:23:18.641 { 00:23:18.641 "trtype": "TCP" 00:23:18.641 } 00:23:18.641 ] 00:23:18.641 } 00:23:18.641 ] 00:23:18.641 }' 00:23:18.641 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:18.641 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:18.641 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:18.641 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:18.641 12:57:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 694137 00:23:26.778 Initializing NVMe Controllers 00:23:26.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:26.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:26.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:26.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:26.779 Initialization complete. Launching workers. 
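Everything needed to judge ADQ placement is in the nvmf_get_stats dump above: four poll groups, all four I/O qpairs confined to the first two (matching the two TC1 hardware queues), the other two idle. The jq/wc pipeline counts the idle groups and the test fails if fewer than two remain. The same check as a standalone sketch, assuming rpc.py talks to the target's default /var/tmp/spdk.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    idle=$("$RPC" nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    # 4 cores with num_tc=2: correct steering leaves at least 2 poll groups idle
    if (( idle < 2 )); then
        echo "ADQ steering failed: only $idle poll groups are idle" >&2
        exit 1
    fi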
00:23:26.779 ======================================================== 00:23:26.779 Latency(us) 00:23:26.779 Device Information : IOPS MiB/s Average min max 00:23:26.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10548.73 41.21 6068.65 1183.45 49623.72 00:23:26.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10678.73 41.71 5994.08 1140.65 49847.75 00:23:26.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8764.54 34.24 7303.27 1234.22 52118.83 00:23:26.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9668.04 37.77 6635.14 1183.56 54331.92 00:23:26.779 ======================================================== 00:23:26.779 Total : 39660.05 154.92 6459.51 1140.65 54331.92 00:23:26.779 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.779 rmmod nvme_tcp 00:23:26.779 rmmod nvme_fabrics 00:23:26.779 rmmod nvme_keyring 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 693916 ']' 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 693916 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 693916 ']' 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 693916 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 693916 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 693916' 00:23:26.779 killing process with pid 693916 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 693916 00:23:26.779 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 693916 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.039 12:58:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.039 12:58:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.953 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:28.953 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:28.953 00:23:28.953 real 0m53.966s 00:23:28.953 user 2m50.264s 00:23:28.953 sys 0m12.130s 00:23:28.953 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.953 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.953 ************************************ 00:23:28.953 END TEST nvmf_perf_adq 00:23:28.953 ************************************ 00:23:29.215 12:58:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:29.215 12:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.215 12:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.215 12:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:29.215 ************************************ 00:23:29.215 START TEST nvmf_shutdown 00:23:29.215 ************************************ 00:23:29.215 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:29.215 * Looking for test storage... 
00:23:29.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:29.215 12:58:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.215 --rc genhtml_branch_coverage=1 00:23:29.215 --rc genhtml_function_coverage=1 00:23:29.215 --rc genhtml_legend=1 00:23:29.215 --rc geninfo_all_blocks=1 00:23:29.215 --rc geninfo_unexecuted_blocks=1 00:23:29.215 00:23:29.215 ' 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.215 --rc genhtml_branch_coverage=1 00:23:29.215 --rc genhtml_function_coverage=1 00:23:29.215 --rc genhtml_legend=1 00:23:29.215 --rc geninfo_all_blocks=1 00:23:29.215 --rc geninfo_unexecuted_blocks=1 00:23:29.215 00:23:29.215 ' 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.215 --rc genhtml_branch_coverage=1 00:23:29.215 --rc genhtml_function_coverage=1 00:23:29.215 --rc genhtml_legend=1 00:23:29.215 --rc geninfo_all_blocks=1 00:23:29.215 --rc geninfo_unexecuted_blocks=1 00:23:29.215 00:23:29.215 ' 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.215 --rc genhtml_branch_coverage=1 00:23:29.215 --rc genhtml_function_coverage=1 00:23:29.215 --rc genhtml_legend=1 00:23:29.215 --rc geninfo_all_blocks=1 00:23:29.215 --rc geninfo_unexecuted_blocks=1 00:23:29.215 00:23:29.215 ' 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
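The scripts/common.sh trace above is a pure-bash version comparison: `lt 1.15 2` splits both strings on `.`, `-`, and `:` and walks the fields numerically, treating a missing field as 0. A compact sketch of the same algorithm (assuming numeric fields only; the real helper also regex-checks each field before comparing):

    lt() {  # usage: lt 1.15 2  -> exit 0 (true), since 1.15 < 2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov is older than 2.x'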
00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.215 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:29.476 12:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:29.476 ************************************ 00:23:29.476 START TEST nvmf_shutdown_tc1 00:23:29.476 ************************************ 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.476 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.477 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.477 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.477 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.477 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.477 12:58:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.622 12:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.622 12:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:37.622 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:37.622 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:37.622 Found net devices under 0000:31:00.0: cvl_0_0 00:23:37.622 12:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.622 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:37.623 Found net devices under 0000:31:00.1: cvl_0_1 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.623 12:58:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:23:37.623 00:23:37.623 --- 10.0.0.2 ping statistics --- 00:23:37.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.623 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:23:37.623 00:23:37.623 --- 10.0.0.1 ping statistics --- 00:23:37.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.623 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=700825 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 700825 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 700825 ']' 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
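nvmfappstart, beginning above for the shutdown test, is the generic launcher: start nvmf_tgt inside the target namespace, remember its pid, then spin until the process is alive and its RPC socket exists. A simplified sketch of that launch-and-wait loop (the real waitforlisten additionally probes the socket with an RPC call, omitted here; the pid captured below is the ip-netns wrapper's, which is close enough for a liveness check):

    APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk "$APP" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for (( i = 0; i < 100; i++ )); do         # up to ~10 s of retries
        kill -0 "$nvmfpid" 2>/dev/null \
            || { echo 'nvmf_tgt exited early' >&2; exit 1; }  # kill -0: liveness only
        [[ -S /var/tmp/spdk.sock ]] && break  # RPC socket is up, proceed
        sleep 0.1
    done

The 0x1E core mask matches what the test passes, putting the four reactors on cores 1-4 as the reactor_run notices below confirm.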
00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:37.623 12:58:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:37.623 [2024-11-25 12:58:17.401533] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:23:37.623 [2024-11-25 12:58:17.401614] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.623 [2024-11-25 12:58:17.508295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.884 [2024-11-25 12:58:17.561111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.884 [2024-11-25 12:58:17.561166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.884 [2024-11-25 12:58:17.561175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.884 [2024-11-25 12:58:17.561184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.884 [2024-11-25 12:58:17.561191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.884 [2024-11-25 12:58:17.563216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.884 [2024-11-25 12:58:17.563380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.884 [2024-11-25 12:58:17.563549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.884 [2024-11-25 12:58:17.563550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.455 [2024-11-25 12:58:18.247206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.455 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:38.456 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:38.456 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:38.456 12:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.456 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.456 Malloc1 00:23:38.716 [2024-11-25 12:58:18.363094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.716 Malloc2 00:23:38.716 Malloc3 00:23:38.716 Malloc4 00:23:38.716 Malloc5 00:23:38.716 Malloc6 00:23:38.716 Malloc7 00:23:38.978 Malloc8 00:23:38.978 Malloc9 00:23:38.978 Malloc10 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=701143 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 701143 /var/tmp/bdevperf.sock 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 701143 ']' 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.978 { 00:23:38.978 "params": { 00:23:38.978 "name": "Nvme$subsystem", 00:23:38.978 "trtype": "$TEST_TRANSPORT", 00:23:38.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.978 "adrfam": "ipv4", 00:23:38.978 "trsvcid": "$NVMF_PORT", 00:23:38.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.978 "hdgst": ${hdgst:-false}, 00:23:38.978 "ddgst": ${ddgst:-false} 00:23:38.978 }, 00:23:38.978 "method": "bdev_nvme_attach_controller" 00:23:38.978 } 00:23:38.978 EOF 00:23:38.978 )") 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.978 { 00:23:38.978 "params": { 00:23:38.978 "name": "Nvme$subsystem", 00:23:38.978 "trtype": "$TEST_TRANSPORT", 00:23:38.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.978 "adrfam": "ipv4", 00:23:38.978 "trsvcid": "$NVMF_PORT", 00:23:38.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.978 "hdgst": ${hdgst:-false}, 00:23:38.978 "ddgst": ${ddgst:-false} 00:23:38.978 }, 00:23:38.978 "method": "bdev_nvme_attach_controller" 00:23:38.978 } 00:23:38.978 EOF 00:23:38.978 )") 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.978 { 00:23:38.978 "params": { 00:23:38.978 "name": "Nvme$subsystem", 00:23:38.978 "trtype": "$TEST_TRANSPORT", 00:23:38.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.978 "adrfam": "ipv4", 00:23:38.978 "trsvcid": "$NVMF_PORT", 00:23:38.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.978 "hdgst": ${hdgst:-false}, 00:23:38.978 "ddgst": ${ddgst:-false} 00:23:38.978 }, 00:23:38.978 "method": "bdev_nvme_attach_controller" 
00:23:38.978 } 00:23:38.978 EOF 00:23:38.978 )") 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.978 { 00:23:38.978 "params": { 00:23:38.978 "name": "Nvme$subsystem", 00:23:38.978 "trtype": "$TEST_TRANSPORT", 00:23:38.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.978 "adrfam": "ipv4", 00:23:38.978 "trsvcid": "$NVMF_PORT", 00:23:38.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.978 "hdgst": ${hdgst:-false}, 00:23:38.978 "ddgst": ${ddgst:-false} 00:23:38.978 }, 00:23:38.978 "method": "bdev_nvme_attach_controller" 00:23:38.978 } 00:23:38.978 EOF 00:23:38.978 )") 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.978 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.978 { 00:23:38.978 "params": { 00:23:38.978 "name": "Nvme$subsystem", 00:23:38.978 "trtype": "$TEST_TRANSPORT", 00:23:38.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "$NVMF_PORT", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.979 "hdgst": ${hdgst:-false}, 00:23:38.979 "ddgst": ${ddgst:-false} 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 } 00:23:38.979 EOF 00:23:38.979 )") 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.979 { 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme$subsystem", 00:23:38.979 "trtype": "$TEST_TRANSPORT", 00:23:38.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "$NVMF_PORT", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.979 "hdgst": ${hdgst:-false}, 00:23:38.979 "ddgst": ${ddgst:-false} 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 } 00:23:38.979 EOF 00:23:38.979 )") 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.979 { 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme$subsystem", 00:23:38.979 "trtype": "$TEST_TRANSPORT", 00:23:38.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "$NVMF_PORT", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.979 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.979 "hdgst": ${hdgst:-false}, 00:23:38.979 "ddgst": ${ddgst:-false} 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 } 00:23:38.979 EOF 00:23:38.979 )") 00:23:38.979 [2024-11-25 12:58:18.822560] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:23:38.979 [2024-11-25 12:58:18.822614] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.979 { 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme$subsystem", 00:23:38.979 "trtype": "$TEST_TRANSPORT", 00:23:38.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "$NVMF_PORT", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.979 "hdgst": ${hdgst:-false}, 00:23:38.979 "ddgst": ${ddgst:-false} 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 } 00:23:38.979 EOF 00:23:38.979 )") 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.979 { 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme$subsystem", 00:23:38.979 "trtype": "$TEST_TRANSPORT", 00:23:38.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "$NVMF_PORT", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.979 "hdgst": ${hdgst:-false}, 00:23:38.979 "ddgst": ${ddgst:-false} 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 } 00:23:38.979 EOF 00:23:38.979 )") 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:38.979 { 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme$subsystem", 00:23:38.979 "trtype": "$TEST_TRANSPORT", 00:23:38.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "$NVMF_PORT", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.979 "hdgst": ${hdgst:-false}, 00:23:38.979 "ddgst": ${ddgst:-false} 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 } 00:23:38.979 EOF 00:23:38.979 )") 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# cat 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:38.979 12:58:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme1", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 },{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme2", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 },{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme3", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 },{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme4", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 },{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme5", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 },{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme6", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 },{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme7", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 },{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme8", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 
"trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.979 }, 00:23:38.979 "method": "bdev_nvme_attach_controller" 00:23:38.979 },{ 00:23:38.979 "params": { 00:23:38.979 "name": "Nvme9", 00:23:38.979 "trtype": "tcp", 00:23:38.979 "traddr": "10.0.0.2", 00:23:38.979 "adrfam": "ipv4", 00:23:38.979 "trsvcid": "4420", 00:23:38.979 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:38.979 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:38.979 "hdgst": false, 00:23:38.979 "ddgst": false 00:23:38.980 }, 00:23:38.980 "method": "bdev_nvme_attach_controller" 00:23:38.980 },{ 00:23:38.980 "params": { 00:23:38.980 "name": "Nvme10", 00:23:38.980 "trtype": "tcp", 00:23:38.980 "traddr": "10.0.0.2", 00:23:38.980 "adrfam": "ipv4", 00:23:38.980 "trsvcid": "4420", 00:23:38.980 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:38.980 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:38.980 "hdgst": false, 00:23:38.980 "ddgst": false 00:23:38.980 }, 00:23:38.980 "method": "bdev_nvme_attach_controller" 00:23:38.980 }' 00:23:39.241 [2024-11-25 12:58:18.902192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.241 [2024-11-25 12:58:18.938557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 701143 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:40.636 12:58:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:41.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 701143 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 700825 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 
00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.600 { 00:23:41.600 "params": { 00:23:41.600 "name": "Nvme$subsystem", 00:23:41.600 "trtype": "$TEST_TRANSPORT", 00:23:41.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.600 "adrfam": "ipv4", 00:23:41.600 "trsvcid": "$NVMF_PORT", 00:23:41.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.600 "hdgst": ${hdgst:-false}, 00:23:41.600 "ddgst": ${ddgst:-false} 00:23:41.600 }, 00:23:41.600 "method": "bdev_nvme_attach_controller" 00:23:41.600 } 00:23:41.600 EOF 00:23:41.600 )") 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.600 { 00:23:41.600 "params": { 00:23:41.600 "name": "Nvme$subsystem", 00:23:41.600 "trtype": "$TEST_TRANSPORT", 00:23:41.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.600 "adrfam": "ipv4", 00:23:41.600 "trsvcid": "$NVMF_PORT", 00:23:41.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.600 "hdgst": ${hdgst:-false}, 00:23:41.600 "ddgst": ${ddgst:-false} 00:23:41.600 }, 00:23:41.600 "method": "bdev_nvme_attach_controller" 00:23:41.600 } 00:23:41.600 EOF 00:23:41.600 )") 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.600 { 00:23:41.600 "params": { 00:23:41.600 "name": "Nvme$subsystem", 00:23:41.600 "trtype": "$TEST_TRANSPORT", 00:23:41.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.600 "adrfam": "ipv4", 00:23:41.600 "trsvcid": "$NVMF_PORT", 00:23:41.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.600 "hdgst": ${hdgst:-false}, 00:23:41.600 "ddgst": ${ddgst:-false} 00:23:41.600 }, 00:23:41.600 "method": "bdev_nvme_attach_controller" 00:23:41.600 } 00:23:41.600 EOF 00:23:41.600 )") 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.600 { 00:23:41.600 "params": { 00:23:41.600 "name": "Nvme$subsystem", 00:23:41.600 "trtype": "$TEST_TRANSPORT", 00:23:41.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.600 "adrfam": "ipv4", 00:23:41.600 "trsvcid": "$NVMF_PORT", 00:23:41.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.600 "hdgst": ${hdgst:-false}, 00:23:41.600 "ddgst": ${ddgst:-false} 00:23:41.600 }, 00:23:41.600 "method": 
"bdev_nvme_attach_controller" 00:23:41.600 } 00:23:41.600 EOF 00:23:41.600 )") 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.600 { 00:23:41.600 "params": { 00:23:41.600 "name": "Nvme$subsystem", 00:23:41.600 "trtype": "$TEST_TRANSPORT", 00:23:41.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.600 "adrfam": "ipv4", 00:23:41.600 "trsvcid": "$NVMF_PORT", 00:23:41.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.600 "hdgst": ${hdgst:-false}, 00:23:41.600 "ddgst": ${ddgst:-false} 00:23:41.600 }, 00:23:41.600 "method": "bdev_nvme_attach_controller" 00:23:41.600 } 00:23:41.600 EOF 00:23:41.600 )") 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.600 { 00:23:41.600 "params": { 00:23:41.600 "name": "Nvme$subsystem", 00:23:41.600 "trtype": "$TEST_TRANSPORT", 00:23:41.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.600 "adrfam": "ipv4", 00:23:41.600 "trsvcid": "$NVMF_PORT", 00:23:41.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.600 "hdgst": ${hdgst:-false}, 00:23:41.600 "ddgst": ${ddgst:-false} 00:23:41.600 }, 00:23:41.600 "method": "bdev_nvme_attach_controller" 00:23:41.600 } 00:23:41.600 EOF 00:23:41.600 )") 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.600 [2024-11-25 12:58:21.308769] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:23:41.600 [2024-11-25 12:58:21.308825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701771 ] 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.600 { 00:23:41.600 "params": { 00:23:41.600 "name": "Nvme$subsystem", 00:23:41.600 "trtype": "$TEST_TRANSPORT", 00:23:41.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.600 "adrfam": "ipv4", 00:23:41.600 "trsvcid": "$NVMF_PORT", 00:23:41.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.600 "hdgst": ${hdgst:-false}, 00:23:41.600 "ddgst": ${ddgst:-false} 00:23:41.600 }, 00:23:41.600 "method": "bdev_nvme_attach_controller" 00:23:41.600 } 00:23:41.600 EOF 00:23:41.600 )") 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.600 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.601 { 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme$subsystem", 00:23:41.601 "trtype": "$TEST_TRANSPORT", 00:23:41.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "$NVMF_PORT", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.601 "hdgst": ${hdgst:-false}, 00:23:41.601 "ddgst": ${ddgst:-false} 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 } 00:23:41.601 EOF 00:23:41.601 )") 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.601 { 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme$subsystem", 00:23:41.601 "trtype": "$TEST_TRANSPORT", 00:23:41.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "$NVMF_PORT", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.601 "hdgst": ${hdgst:-false}, 00:23:41.601 "ddgst": ${ddgst:-false} 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 } 00:23:41.601 EOF 00:23:41.601 )") 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.601 { 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme$subsystem", 00:23:41.601 "trtype": "$TEST_TRANSPORT", 00:23:41.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.601 
"adrfam": "ipv4", 00:23:41.601 "trsvcid": "$NVMF_PORT", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.601 "hdgst": ${hdgst:-false}, 00:23:41.601 "ddgst": ${ddgst:-false} 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 } 00:23:41.601 EOF 00:23:41.601 )") 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:41.601 12:58:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme1", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme2", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme3", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme4", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme5", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme6", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme7", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 
00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme8", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme9", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:41.601 "hdgst": false, 00:23:41.601 "ddgst": false 00:23:41.601 }, 00:23:41.601 "method": "bdev_nvme_attach_controller" 00:23:41.601 },{ 00:23:41.601 "params": { 00:23:41.601 "name": "Nvme10", 00:23:41.601 "trtype": "tcp", 00:23:41.601 "traddr": "10.0.0.2", 00:23:41.601 "adrfam": "ipv4", 00:23:41.601 "trsvcid": "4420", 00:23:41.601 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:41.601 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:41.602 "hdgst": false, 00:23:41.602 "ddgst": false 00:23:41.602 }, 00:23:41.602 "method": "bdev_nvme_attach_controller" 00:23:41.602 }' 00:23:41.602 [2024-11-25 12:58:21.387016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.602 [2024-11-25 12:58:21.423072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.987 Running I/O for 1 seconds... 
00:23:43.929 1861.00 IOPS, 116.31 MiB/s
00:23:43.929 Latency(us)
00:23:43.929 [2024-11-25T11:58:23.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:43.929 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme1n1 : 1.14 225.27 14.08 0.00 0.00 281254.61 16602.45 242920.11
00:23:43.929 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme2n1 : 1.14 223.79 13.99 0.00 0.00 278412.37 14854.83 232434.35
00:23:43.929 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme3n1 : 1.13 225.81 14.11 0.00 0.00 270874.67 18131.63 246415.36
00:23:43.929 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme4n1 : 1.13 226.53 14.16 0.00 0.00 265332.91 20643.84 241172.48
00:23:43.929 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme5n1 : 1.17 217.99 13.62 0.00 0.00 271899.95 22282.24 260396.37
00:23:43.929 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme6n1 : 1.17 273.18 17.07 0.00 0.00 212353.88 16384.00 258648.75
00:23:43.929 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme7n1 : 1.18 270.69 16.92 0.00 0.00 211435.18 12779.52 270882.13
00:23:43.929 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme8n1 : 1.15 223.22 13.95 0.00 0.00 250792.53 15073.28 255153.49
00:23:43.929 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme9n1 : 1.18 273.71 17.11 0.00 0.00 200928.86 3904.85 241172.48
00:23:43.929 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:43.929 Verification LBA range: start 0x0 length 0x400
00:23:43.929 Nvme10n1 : 1.20 267.03 16.69 0.00 0.00 203254.44 9502.72 260396.37
00:23:43.929 [2024-11-25T11:58:23.832Z] ===================================================================================================================
00:23:43.929 [2024-11-25T11:58:23.832Z] Total : 2427.23 151.70 0.00 0.00 241173.06 3904.85 270882.13
00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:44.190 12:58:23
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.190 12:58:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.190 rmmod nvme_tcp 00:23:44.190 rmmod nvme_fabrics 00:23:44.190 rmmod nvme_keyring 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 700825 ']' 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 700825 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 700825 ']' 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 700825 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.190 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 700825 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 700825' 00:23:44.450 killing process with pid 700825 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 700825 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 700825 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.450 12:58:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.994 00:23:46.994 real 0m17.229s 00:23:46.994 user 0m33.020s 00:23:46.994 sys 0m7.321s 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:46.994 ************************************ 00:23:46.994 END TEST nvmf_shutdown_tc1 00:23:46.994 ************************************ 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:46.994 ************************************ 00:23:46.994 START TEST nvmf_shutdown_tc2 00:23:46.994 ************************************ 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.994 12:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:46.994 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:46.994 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.994 12:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:46.994 Found net devices under 0000:31:00.0: cvl_0_0 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.994 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:46.995 Found net devices under 0000:31:00.1: cvl_0_1 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:46.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:23:46.995 00:23:46.995 --- 10.0.0.2 ping statistics --- 00:23:46.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.995 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:23:46.995 00:23:46.995 --- 10.0.0.1 ping statistics --- 00:23:46.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.995 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=702949 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 702949 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 702949 ']' 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.995 12:58:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:47.255 [2024-11-25 12:58:26.952154] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:23:47.255 [2024-11-25 12:58:26.952206] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.255 [2024-11-25 12:58:27.050609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.255 [2024-11-25 12:58:27.081607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.255 [2024-11-25 12:58:27.081635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.255 [2024-11-25 12:58:27.081640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.255 [2024-11-25 12:58:27.081645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.255 [2024-11-25 12:58:27.081649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.255 [2024-11-25 12:58:27.083103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.255 [2024-11-25 12:58:27.083266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.255 [2024-11-25 12:58:27.083422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.255 [2024-11-25 12:58:27.083423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.196 [2024-11-25 12:58:27.776479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.196 12:58:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:48.196 12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.196 
12:58:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.196 Malloc1 00:23:48.196 [2024-11-25 12:58:27.889815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.196 Malloc2 00:23:48.196 Malloc3 00:23:48.196 Malloc4 00:23:48.196 Malloc5 00:23:48.196 Malloc6 00:23:48.196 Malloc7 00:23:48.457 Malloc8 00:23:48.457 Malloc9 00:23:48.457 Malloc10 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=703261 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 703261 /var/tmp/bdevperf.sock 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 703261 ']' 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
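The entries that follow trace the bdevperf launch for this pass. A condensed sketch of that invocation, reconstructed from the traced argv rather than copied verbatim from target/shutdown.sh (gen_nvmf_target_json is the suite helper from nvmf/common.sh, and bash process substitution is what surfaces as /dev/fd/63 in the trace):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10

The generated JSON, echoed in full a few entries below, amounts to one bdev_nvme_attach_controller stanza per subsystem (Nvme1 through Nvme10), all pointing at 10.0.0.2 port 4420 over TCP.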
00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:48.457 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.458 { 00:23:48.458 "params": { 00:23:48.458 "name": "Nvme$subsystem", 00:23:48.458 "trtype": "$TEST_TRANSPORT", 00:23:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.458 "adrfam": "ipv4", 00:23:48.458 "trsvcid": "$NVMF_PORT", 00:23:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.458 "hdgst": ${hdgst:-false}, 00:23:48.458 "ddgst": ${ddgst:-false} 00:23:48.458 }, 00:23:48.458 "method": "bdev_nvme_attach_controller" 00:23:48.458 } 00:23:48.458 EOF 00:23:48.458 )") 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.458 { 00:23:48.458 "params": { 00:23:48.458 "name": "Nvme$subsystem", 00:23:48.458 "trtype": "$TEST_TRANSPORT", 00:23:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.458 "adrfam": "ipv4", 00:23:48.458 "trsvcid": "$NVMF_PORT", 00:23:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.458 "hdgst": ${hdgst:-false}, 00:23:48.458 "ddgst": ${ddgst:-false} 00:23:48.458 }, 00:23:48.458 "method": "bdev_nvme_attach_controller" 00:23:48.458 } 00:23:48.458 EOF 00:23:48.458 )") 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.458 { 00:23:48.458 "params": { 00:23:48.458 "name": "Nvme$subsystem", 00:23:48.458 "trtype": "$TEST_TRANSPORT", 00:23:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.458 "adrfam": "ipv4", 00:23:48.458 "trsvcid": "$NVMF_PORT", 00:23:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.458 "hdgst": ${hdgst:-false}, 00:23:48.458 "ddgst": ${ddgst:-false} 00:23:48.458 }, 00:23:48.458 "method": 
"bdev_nvme_attach_controller" 00:23:48.458 } 00:23:48.458 EOF 00:23:48.458 )") 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.458 { 00:23:48.458 "params": { 00:23:48.458 "name": "Nvme$subsystem", 00:23:48.458 "trtype": "$TEST_TRANSPORT", 00:23:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.458 "adrfam": "ipv4", 00:23:48.458 "trsvcid": "$NVMF_PORT", 00:23:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.458 "hdgst": ${hdgst:-false}, 00:23:48.458 "ddgst": ${ddgst:-false} 00:23:48.458 }, 00:23:48.458 "method": "bdev_nvme_attach_controller" 00:23:48.458 } 00:23:48.458 EOF 00:23:48.458 )") 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.458 { 00:23:48.458 "params": { 00:23:48.458 "name": "Nvme$subsystem", 00:23:48.458 "trtype": "$TEST_TRANSPORT", 00:23:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.458 "adrfam": "ipv4", 00:23:48.458 "trsvcid": "$NVMF_PORT", 00:23:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.458 "hdgst": ${hdgst:-false}, 00:23:48.458 "ddgst": ${ddgst:-false} 00:23:48.458 }, 00:23:48.458 "method": "bdev_nvme_attach_controller" 00:23:48.458 } 00:23:48.458 EOF 00:23:48.458 )") 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.458 { 00:23:48.458 "params": { 00:23:48.458 "name": "Nvme$subsystem", 00:23:48.458 "trtype": "$TEST_TRANSPORT", 00:23:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.458 "adrfam": "ipv4", 00:23:48.458 "trsvcid": "$NVMF_PORT", 00:23:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.458 "hdgst": ${hdgst:-false}, 00:23:48.458 "ddgst": ${ddgst:-false} 00:23:48.458 }, 00:23:48.458 "method": "bdev_nvme_attach_controller" 00:23:48.458 } 00:23:48.458 EOF 00:23:48.458 )") 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.458 [2024-11-25 12:58:28.343473] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:23:48.458 [2024-11-25 12:58:28.343527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid703261 ] 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.458 { 00:23:48.458 "params": { 00:23:48.458 "name": "Nvme$subsystem", 00:23:48.458 "trtype": "$TEST_TRANSPORT", 00:23:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.458 "adrfam": "ipv4", 00:23:48.458 "trsvcid": "$NVMF_PORT", 00:23:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.458 "hdgst": ${hdgst:-false}, 00:23:48.458 "ddgst": ${ddgst:-false} 00:23:48.458 }, 00:23:48.458 "method": "bdev_nvme_attach_controller" 00:23:48.458 } 00:23:48.458 EOF 00:23:48.458 )") 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.458 { 00:23:48.458 "params": { 00:23:48.458 "name": "Nvme$subsystem", 00:23:48.458 "trtype": "$TEST_TRANSPORT", 00:23:48.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.458 "adrfam": "ipv4", 00:23:48.458 "trsvcid": "$NVMF_PORT", 00:23:48.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.458 "hdgst": ${hdgst:-false}, 00:23:48.458 "ddgst": ${ddgst:-false} 00:23:48.458 }, 00:23:48.458 "method": "bdev_nvme_attach_controller" 00:23:48.458 } 00:23:48.458 EOF 00:23:48.458 )") 00:23:48.458 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.718 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.718 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.718 { 00:23:48.718 "params": { 00:23:48.718 "name": "Nvme$subsystem", 00:23:48.718 "trtype": "$TEST_TRANSPORT", 00:23:48.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.718 "adrfam": "ipv4", 00:23:48.718 "trsvcid": "$NVMF_PORT", 00:23:48.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.718 "hdgst": ${hdgst:-false}, 00:23:48.718 "ddgst": ${ddgst:-false} 00:23:48.718 }, 00:23:48.718 "method": "bdev_nvme_attach_controller" 00:23:48.718 } 00:23:48.718 EOF 00:23:48.718 )") 00:23:48.718 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.718 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.718 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.718 { 00:23:48.718 "params": { 00:23:48.719 "name": "Nvme$subsystem", 00:23:48.719 "trtype": "$TEST_TRANSPORT", 00:23:48.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.719 
"adrfam": "ipv4", 00:23:48.719 "trsvcid": "$NVMF_PORT", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.719 "hdgst": ${hdgst:-false}, 00:23:48.719 "ddgst": ${ddgst:-false} 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 } 00:23:48.719 EOF 00:23:48.719 )") 00:23:48.719 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:48.719 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:48.719 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:48.719 12:58:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme1", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme2", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme3", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme4", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme5", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme6", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme7", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 
00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme8", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme9", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 },{ 00:23:48.719 "params": { 00:23:48.719 "name": "Nvme10", 00:23:48.719 "trtype": "tcp", 00:23:48.719 "traddr": "10.0.0.2", 00:23:48.719 "adrfam": "ipv4", 00:23:48.719 "trsvcid": "4420", 00:23:48.719 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:48.719 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:48.719 "hdgst": false, 00:23:48.719 "ddgst": false 00:23:48.719 }, 00:23:48.719 "method": "bdev_nvme_attach_controller" 00:23:48.719 }' 00:23:48.719 [2024-11-25 12:58:28.423018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.719 [2024-11-25 12:58:28.459589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.102 Running I/O for 10 seconds... 
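The iostat polling that follows is the suite's waitforio helper at work. A sketch of that loop, pieced together from the shutdown.sh line numbers in the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py; the ten retries, the 0.25 s sleep, and the 100-read threshold are all visible in the entries below):

    waitforio() {
        # usage: waitforio <rpc socket> <bdev name>; a reconstruction, not verbatim source
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$1" bdev_get_iostat -b "$2" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

In this run num_read_ops climbs 3, 67, 131 across three polls, so the loop breaks on the third pass and the test moves on to killing bdevperf (pid 703261) while I/O is still in flight.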
00:23:50.102 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.102 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:50.102 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:50.102 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.102 12:58:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.363 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.363 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:50.363 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:50.363 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:50.363 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:50.363 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:50.363 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:50.363 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:50.364 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:50.364 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:50.364 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.364 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.364 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.364 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:50.364 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:50.364 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.626 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:50.626 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 703261 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 703261 ']' 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 703261 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.888 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 703261 00:23:51.148 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.148 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.148 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 703261' 00:23:51.148 killing process with pid 703261 00:23:51.148 12:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 703261
00:23:51.148 12:58:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 703261
00:23:51.148 Received shutdown signal, test time was about 0.974213 seconds
00:23:51.148
00:23:51.148 Latency(us)
00:23:51.148 [2024-11-25T11:58:31.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:51.148 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme1n1 : 0.95 202.85 12.68 0.00 0.00 311748.84 19879.25 260396.37
00:23:51.148 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme2n1 : 0.96 270.23 16.89 0.00 0.00 229143.08 1153.71 246415.36
00:23:51.148 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme3n1 : 0.97 265.14 16.57 0.00 0.00 229166.51 18896.21 244667.73
00:23:51.148 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme4n1 : 0.96 267.98 16.75 0.00 0.00 221658.24 17148.59 241172.48
00:23:51.148 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme5n1 : 0.97 264.23 16.51 0.00 0.00 220232.75 17476.27 244667.73
00:23:51.148 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme6n1 : 0.94 211.81 13.24 0.00 0.00 265629.58 4560.21 246415.36
00:23:51.148 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme7n1 : 0.96 274.97 17.19 0.00 0.00 200890.99 7700.48 221074.77
00:23:51.148 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme8n1 : 0.94 205.34 12.83 0.00 0.00 263567.08 17257.81 248162.99
00:23:51.148 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme9n1 : 0.97 263.05 16.44 0.00 0.00 202561.92 19770.03 228939.09
00:23:51.148 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:51.148 Verification LBA range: start 0x0 length 0x400
00:23:51.148 Nvme10n1 : 0.95 201.73 12.61 0.00 0.00 256825.17 16602.45 265639.25
00:23:51.148 [2024-11-25T11:58:31.051Z] ===================================================================================================================
00:23:51.148 [2024-11-25T11:58:31.051Z] Total : 2427.33 151.71 0.00 0.00 236300.38 1153.71 265639.25
00:23:51.148 12:58:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 702949
00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:52.532 12:58:32
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.532 rmmod nvme_tcp 00:23:52.532 rmmod nvme_fabrics 00:23:52.532 rmmod nvme_keyring 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 702949 ']' 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 702949 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 702949 ']' 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 702949 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 702949 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 702949' 00:23:52.532 killing process with pid 702949 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 702949 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 702949 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.532 12:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.532 12:58:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:55.076 00:23:55.076 real 0m8.004s 00:23:55.076 user 0m24.257s 00:23:55.076 sys 0m1.297s 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:55.076 ************************************ 00:23:55.076 END TEST nvmf_shutdown_tc2 00:23:55.076 ************************************ 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:55.076 ************************************ 00:23:55.076 START TEST nvmf_shutdown_tc3 00:23:55.076 ************************************ 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.076 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:55.077 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:55.077 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.077 12:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:55.077 Found net devices under 0000:31:00.0: cvl_0_0 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:55.077 Found net devices under 0000:31:00.1: cvl_0_1 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.077 12:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.077 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:23:55.078 00:23:55.078 --- 10.0.0.2 ping statistics --- 00:23:55.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.078 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:23:55.078 00:23:55.078 --- 10.0.0.1 ping statistics --- 00:23:55.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.078 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=704556 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 704556 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 704556 ']' 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.078 12:58:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:55.339 [2024-11-25 12:58:34.982682] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:23:55.339 [2024-11-25 12:58:34.982752] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.339 [2024-11-25 12:58:35.086303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.339 [2024-11-25 12:58:35.121166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.339 [2024-11-25 12:58:35.121200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.339 [2024-11-25 12:58:35.121206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.339 [2024-11-25 12:58:35.121211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.339 [2024-11-25 12:58:35.121215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
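Taken together, the nvmf_tcp_init trace above amounts to the topology sketched below. This is a reconstruction from the traced commands, not the helper itself; it assumes the same cvl_0_0/cvl_0_1 port names, and the common.sh wrapper additionally tags the iptables rule with an 'SPDK_NVMF:' comment as shown in the trace. The target port is moved into its own network namespace while the initiator port stays in the root namespace, which is why nvmf_tgt is then launched via ip netns exec and its 10.0.0.2:4420 listener is reachable from the root namespace over cvl_0_1.

    # Reconstruction of the nvmf_tcp_init steps traced above (assumed port names).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator check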
00:23:55.339 [2024-11-25 12:58:35.122535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.339 [2024-11-25 12:58:35.122690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.339 [2024-11-25 12:58:35.122849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.339 [2024-11-25 12:58:35.122851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:55.908 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.908 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:55.908 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.908 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.908 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.169 [2024-11-25 12:58:35.837205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.169 12:58:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.169 Malloc1 00:23:56.169 [2024-11-25 12:58:35.947816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.169 Malloc2 00:23:56.169 Malloc3 00:23:56.169 Malloc4 00:23:56.429 Malloc5 00:23:56.429 Malloc6 00:23:56.429 Malloc7 00:23:56.429 Malloc8 00:23:56.429 Malloc9 00:23:56.429 Malloc10 00:23:56.429 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.429 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:56.429 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:56.429 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=704860 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 704860 /var/tmp/bdevperf.sock 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 704860 ']' 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.690 12:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.690 { 00:23:56.690 "params": { 00:23:56.690 "name": "Nvme$subsystem", 00:23:56.690 "trtype": "$TEST_TRANSPORT", 00:23:56.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.690 "adrfam": "ipv4", 00:23:56.690 "trsvcid": "$NVMF_PORT", 00:23:56.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.690 "hdgst": ${hdgst:-false}, 00:23:56.690 "ddgst": ${ddgst:-false} 00:23:56.690 }, 00:23:56.690 "method": "bdev_nvme_attach_controller" 00:23:56.690 } 00:23:56.690 EOF 00:23:56.690 )") 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.690 { 00:23:56.690 "params": { 00:23:56.690 "name": "Nvme$subsystem", 00:23:56.690 "trtype": "$TEST_TRANSPORT", 00:23:56.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.690 "adrfam": "ipv4", 00:23:56.690 "trsvcid": "$NVMF_PORT", 00:23:56.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.690 "hdgst": ${hdgst:-false}, 00:23:56.690 "ddgst": ${ddgst:-false} 00:23:56.690 }, 00:23:56.690 "method": "bdev_nvme_attach_controller" 00:23:56.690 } 00:23:56.690 EOF 00:23:56.690 )") 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.690 { 00:23:56.690 "params": { 00:23:56.690 
"name": "Nvme$subsystem", 00:23:56.690 "trtype": "$TEST_TRANSPORT", 00:23:56.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.690 "adrfam": "ipv4", 00:23:56.690 "trsvcid": "$NVMF_PORT", 00:23:56.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.690 "hdgst": ${hdgst:-false}, 00:23:56.690 "ddgst": ${ddgst:-false} 00:23:56.690 }, 00:23:56.690 "method": "bdev_nvme_attach_controller" 00:23:56.690 } 00:23:56.690 EOF 00:23:56.690 )") 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.690 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.690 { 00:23:56.690 "params": { 00:23:56.691 "name": "Nvme$subsystem", 00:23:56.691 "trtype": "$TEST_TRANSPORT", 00:23:56.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "$NVMF_PORT", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.691 "hdgst": ${hdgst:-false}, 00:23:56.691 "ddgst": ${ddgst:-false} 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 } 00:23:56.691 EOF 00:23:56.691 )") 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.691 { 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme$subsystem", 00:23:56.691 "trtype": "$TEST_TRANSPORT", 00:23:56.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "$NVMF_PORT", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.691 "hdgst": ${hdgst:-false}, 00:23:56.691 "ddgst": ${ddgst:-false} 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 } 00:23:56.691 EOF 00:23:56.691 )") 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.691 { 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme$subsystem", 00:23:56.691 "trtype": "$TEST_TRANSPORT", 00:23:56.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "$NVMF_PORT", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.691 "hdgst": ${hdgst:-false}, 00:23:56.691 "ddgst": ${ddgst:-false} 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 } 00:23:56.691 EOF 00:23:56.691 )") 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.691 { 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme$subsystem", 00:23:56.691 "trtype": "$TEST_TRANSPORT", 00:23:56.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "$NVMF_PORT", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.691 "hdgst": ${hdgst:-false}, 00:23:56.691 "ddgst": ${ddgst:-false} 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 } 00:23:56.691 EOF 00:23:56.691 )") 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.691 [2024-11-25 12:58:36.410264] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:23:56.691 [2024-11-25 12:58:36.410322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704860 ] 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.691 { 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme$subsystem", 00:23:56.691 "trtype": "$TEST_TRANSPORT", 00:23:56.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "$NVMF_PORT", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.691 "hdgst": ${hdgst:-false}, 00:23:56.691 "ddgst": ${ddgst:-false} 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 } 00:23:56.691 EOF 00:23:56.691 )") 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.691 { 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme$subsystem", 00:23:56.691 "trtype": "$TEST_TRANSPORT", 00:23:56.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "$NVMF_PORT", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.691 "hdgst": ${hdgst:-false}, 00:23:56.691 "ddgst": ${ddgst:-false} 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 } 00:23:56.691 EOF 00:23:56.691 )") 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:56.691 { 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme$subsystem", 00:23:56.691 "trtype": "$TEST_TRANSPORT", 00:23:56.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.691 "adrfam": 
"ipv4", 00:23:56.691 "trsvcid": "$NVMF_PORT", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.691 "hdgst": ${hdgst:-false}, 00:23:56.691 "ddgst": ${ddgst:-false} 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 } 00:23:56.691 EOF 00:23:56.691 )") 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:56.691 12:58:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme1", 00:23:56.691 "trtype": "tcp", 00:23:56.691 "traddr": "10.0.0.2", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "4420", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.691 "hdgst": false, 00:23:56.691 "ddgst": false 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 },{ 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme2", 00:23:56.691 "trtype": "tcp", 00:23:56.691 "traddr": "10.0.0.2", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "4420", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:56.691 "hdgst": false, 00:23:56.691 "ddgst": false 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 },{ 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme3", 00:23:56.691 "trtype": "tcp", 00:23:56.691 "traddr": "10.0.0.2", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "4420", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:56.691 "hdgst": false, 00:23:56.691 "ddgst": false 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 },{ 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme4", 00:23:56.691 "trtype": "tcp", 00:23:56.691 "traddr": "10.0.0.2", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "4420", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:56.691 "hdgst": false, 00:23:56.691 "ddgst": false 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 },{ 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme5", 00:23:56.691 "trtype": "tcp", 00:23:56.691 "traddr": "10.0.0.2", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "4420", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:56.691 "hdgst": false, 00:23:56.691 "ddgst": false 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 },{ 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme6", 00:23:56.691 "trtype": "tcp", 00:23:56.691 "traddr": "10.0.0.2", 00:23:56.691 "adrfam": "ipv4", 00:23:56.691 "trsvcid": "4420", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:56.691 "hdgst": false, 00:23:56.691 "ddgst": false 00:23:56.691 }, 00:23:56.691 "method": "bdev_nvme_attach_controller" 00:23:56.691 },{ 00:23:56.691 "params": { 00:23:56.691 "name": "Nvme7", 00:23:56.691 "trtype": "tcp", 00:23:56.691 "traddr": "10.0.0.2", 00:23:56.691 
"adrfam": "ipv4", 00:23:56.691 "trsvcid": "4420", 00:23:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:56.691 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:56.691 "hdgst": false, 00:23:56.691 "ddgst": false 00:23:56.692 }, 00:23:56.692 "method": "bdev_nvme_attach_controller" 00:23:56.692 },{ 00:23:56.692 "params": { 00:23:56.692 "name": "Nvme8", 00:23:56.692 "trtype": "tcp", 00:23:56.692 "traddr": "10.0.0.2", 00:23:56.692 "adrfam": "ipv4", 00:23:56.692 "trsvcid": "4420", 00:23:56.692 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:56.692 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:56.692 "hdgst": false, 00:23:56.692 "ddgst": false 00:23:56.692 }, 00:23:56.692 "method": "bdev_nvme_attach_controller" 00:23:56.692 },{ 00:23:56.692 "params": { 00:23:56.692 "name": "Nvme9", 00:23:56.692 "trtype": "tcp", 00:23:56.692 "traddr": "10.0.0.2", 00:23:56.692 "adrfam": "ipv4", 00:23:56.692 "trsvcid": "4420", 00:23:56.692 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:56.692 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:56.692 "hdgst": false, 00:23:56.692 "ddgst": false 00:23:56.692 }, 00:23:56.692 "method": "bdev_nvme_attach_controller" 00:23:56.692 },{ 00:23:56.692 "params": { 00:23:56.692 "name": "Nvme10", 00:23:56.692 "trtype": "tcp", 00:23:56.692 "traddr": "10.0.0.2", 00:23:56.692 "adrfam": "ipv4", 00:23:56.692 "trsvcid": "4420", 00:23:56.692 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:56.692 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:56.692 "hdgst": false, 00:23:56.692 "ddgst": false 00:23:56.692 }, 00:23:56.692 "method": "bdev_nvme_attach_controller" 00:23:56.692 }' 00:23:56.692 [2024-11-25 12:58:36.489303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.692 [2024-11-25 12:58:36.525694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.606 Running I/O for 10 seconds... 
00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:59.177 12:58:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 704556 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 704556 ']' 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 704556 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 704556 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 704556' 00:23:59.448 killing process with pid 704556 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 704556 00:23:59.448 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 704556 00:23:59.448 [2024-11-25 12:58:39.332837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.332889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.332896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.332901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.332912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.332918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 
state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.333184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798450 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.334891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.448 [2024-11-25 12:58:39.334926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.448 [2024-11-25 12:58:39.334936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.448 [2024-11-25 12:58:39.334944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.448 [2024-11-25 12:58:39.334953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.448 [2024-11-25 12:58:39.334960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.448 [2024-11-25 12:58:39.334958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with [2024-11-25 12:58:39.334969] nvme_qpair.c: 
[2024-11-25 12:58:39.334969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.448 [2024-11-25 12:58:39.334979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.448 [2024-11-25 12:58:39.334986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12639c0 is same with the state(6) to be set 00:23:59.448 [2024-11-25 12:58:39.335034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.448 [2024-11-25 12:58:39.335051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.448 [2024-11-25 12:58:39.335060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.448 [2024-11-25 12:58:39.335069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.448 [2024-11-25 12:58:39.335080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.448 [2024-11-25 12:58:39.335089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.448 [2024-11-25 12:58:39.335097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.448 [2024-11-25 12:58:39.335106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.449 [2024-11-25 12:58:39.335114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12f70 is same with the state(6) to be set 00:23:59.449 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.335301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798920 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the 
state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.336995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 12:58:39.337086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798df0 is same with the state(6) to be set 00:23:59.449 [2024-11-25 
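[editor's note: the flood of "recv state ... is same with the state(6) to be set" errors above is the recv-state setter being called with the state the qpair is already in. A minimal sketch of that guard pattern, assuming (not verified against this SPDK revision) that the setter logs and returns when the requested state equals the current one; all names here, including the enum value 6 being labeled RECV_STATE_ERROR, are illustrative, not SPDK's actual code:

    #include <stdio.h>

    enum pdu_recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 6 };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    /* Illustrative guard: log and bail out when asked to "transition"
     * into the state already in effect. A qpair stuck in one state and
     * repeatedly re-set to it is what fills the log with this line. */
    static void
    set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state; /* normal path: actually transition */
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&q, RECV_STATE_ERROR); /* triggers the error line once */
        return 0;
    }
]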
00:23:59.449 [2024-11-25 12:58:39.337810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.449 [2024-11-25 12:58:39.337832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the WRITE/ABORTED pair above repeats for cid:1 through cid:63, lba stepping by 128 from 24704 to 32640, ending at 12:58:39.338984]
[interleaved mid-word with that sweep, tcp.c:1773:nvmf_tcp_qpair_set_recv_state logs "The recv state of tqpair=0x17992e0 is same with the state(6) to be set" dozens of times between 12:58:39.338159 and 12:58:39.338546]
00:23:59.450 [2024-11-25 12:58:39.339795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17997b0 is same with the state(6) to be set
[this message repeats dozens of times for tqpair=0x17997b0, from 12:58:39.339795 through 12:58:39.340106]
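[editor's note: SPDK prints NVMe completion status as (SCT/SC); (00/08) is Status Code Type 0 (Generic Command Status) with Status Code 0x08, "Command Aborted due to SQ Deletion", i.e. every queued I/O on the deleted submission queue is being failed back during the reset. A small self-contained decode of the status word in CQE dword 3 per the NVMe base spec; the raw value below is a hypothetical example, not taken from this run:

    #include <stdio.h>
    #include <stdint.h>

    /* Decode the NVMe completion status (CQE dword 3, bits 31:16) into
     * the (SCT/SC) pair SPDK prints, plus the p/m/dnr flags in the log. */
    int main(void)
    {
        uint32_t cqe_dw3 = 0x00100000;        /* hypothetical raw CQE dword 3 */
        uint16_t status  = cqe_dw3 >> 16;

        unsigned p   = status & 0x1;          /* bit 16: phase tag */
        unsigned sc  = (status >> 1) & 0xff;  /* bits 24:17: status code */
        unsigned sct = (status >> 9) & 0x7;   /* bits 27:25: status code type */
        unsigned m   = (status >> 14) & 0x1;  /* bit 30: more */
        unsigned dnr = (status >> 15) & 0x1;  /* bit 31: do not retry */

        /* prints "(00/08) p:0 m:0 dnr:0", matching the aborts above */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }
]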
00:23:59.451 [2024-11-25 12:58:39.340595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:59.451 [2024-11-25 12:58:39.340627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12639c0 (9): Bad file descriptor
00:23:59.451 [2024-11-25 12:58:39.342112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.451 [2024-11-25 12:58:39.342151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12639c0 with addr=10.0.0.2, port=4420
00:23:59.451 [2024-11-25 12:58:39.342163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12639c0 is same with the state(6) to be set
00:23:59.451 [2024-11-25 12:58:39.342217] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:59.451 [2024-11-25 12:58:39.342491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12639c0 (9): Bad file descriptor
00:23:59.451 [2024-11-25 12:58:39.342555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.451 [2024-11-25 12:58:39.342569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the READ/ABORTED pair above repeats for cid:1 through cid:31, lba stepping by 128 from 24704 to 28544; the captured log cuts off mid-line at 12:58:39.343118]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.451 [2024-11-25 12:58:39.343334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.451 [2024-11-25 12:58:39.343344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.452 [2024-11-25 12:58:39.343650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.452 [2024-11-25 12:58:39.343659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10182b0 is same with the state(6) to be set 00:23:59.452 [2024-11-25 12:58:39.343726] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.452 [2024-11-25 12:58:39.343761] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:59.452 [2024-11-25 12:58:39.343964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:59.452 [2024-11-25 12:58:39.343979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:59.452 [2024-11-25 12:58:39.343989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:59.452 [2024-11-25 12:58:39.343998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:59.724 [2024-11-25 12:58:39.345418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:59.724 [2024-11-25 12:58:39.345474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe09a60 (9): Bad file descriptor 00:23:59.724 [2024-11-25 12:58:39.345531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12f70 (9): Bad file descriptor 00:23:59.724 [2024-11-25 12:58:39.345563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.724 [2024-11-25 12:58:39.345581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.724 [2024-11-25 12:58:39.345590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.724 [2024-11-25 12:58:39.345598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.724 [2024-11-25 12:58:39.345606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.724 [2024-11-25 12:58:39.345613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.724 [2024-11-25 12:58:39.345613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.724 [2024-11-25 12:58:39.345627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.724 [2024-11-25 12:58:39.345632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12af0 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345670]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.724 [2024-11-25 12:58:39.345686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.724 [2024-11-25 12:58:39.345692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.724 [2024-11-25 12:58:39.345703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.724 [2024-11-25 12:58:39.345720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.724 [2024-11-25 12:58:39.345728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345879] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.724 [2024-11-25 12:58:39.345926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.345930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.345935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.345939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.345944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.345949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.345953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799c80 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the 
state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.346998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 
12:58:39.347153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a150 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.725 [2024-11-25 12:58:39.347932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same 
with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.347998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348071] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.348591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.358854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.358893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.358902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.358910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f8a0 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.358961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.358976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.358985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.358993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28610 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.359057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe10830 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.359419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 
12:58:39.359463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f0c0 is same with the state(6) to be set 00:23:59.726 [2024-11-25 12:58:39.359521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12af0 (9): Bad file descriptor 00:23:59.726 [2024-11-25 12:58:39.359550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.726 [2024-11-25 12:58:39.359575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.726 [2024-11-25 12:58:39.359583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.727 [2024-11-25 12:58:39.359590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.727 [2024-11-25 12:58:39.359599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.727 [2024-11-25 12:58:39.359610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.727 [2024-11-25 12:58:39.359622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e910 is same with the state(6) to be set 00:23:59.727 [2024-11-25 12:58:39.359642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f8a0 (9): Bad file descriptor 00:23:59.727 [2024-11-25 12:58:39.359658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28610 (9): Bad file descriptor 00:23:59.727 [2024-11-25 12:58:39.359675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe10830 (9): Bad file descriptor 00:23:59.727 [2024-11-25 12:58:39.364332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.727 [2024-11-25 12:58:39.364354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.727 [2024-11-25 12:58:39.364362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.727 [2024-11-25 
12:58:39.364369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179a640 is same with the state(6) to be set 00:23:59.727 [2024-11-25 12:58:39.364376 - 12:58:39.364457] (the same tcp.c:1773 recv-state message for tqpair=0x179a640 repeated 14 more times; only the microsecond timestamps differ) 00:23:59.727 
[2024-11-25 12:58:39.364991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179ab10 is same with the state(6) to be set 00:23:59.727 [2024-11-25 12:58:39.365005 - 12:58:39.365294] (the same tcp.c:1773 recv-state message for tqpair=0x179ab10 repeated 62 more times; only the microsecond timestamps differ) 00:23:59.728 
[2024-11-25 12:58:39.377362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.728 [2024-11-25 12:58:39.377405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe09a60 with addr=10.0.0.2, port=4420 00:23:59.728 [2024-11-25 12:58:39.377417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe09a60 is same with the state(6) to be set 00:23:59.728 [2024-11-25 12:58:39.377451] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:23:59.728 [2024-11-25 12:58:39.377488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.728 [2024-11-25 12:58:39.377498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.728 [2024-11-25 12:58:39.377508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.728 [2024-11-25 12:58:39.377517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.728 [2024-11-25 12:58:39.377525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.728 [2024-11-25 12:58:39.377533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.728 [2024-11-25 12:58:39.377542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:59.728 [2024-11-25 12:58:39.377549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.728 [2024-11-25 12:58:39.377556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126c670 is same with the state(6) to be set 00:23:59.728 [2024-11-25 12:58:39.377584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f0c0 (9): Bad file descriptor 00:23:59.728 [2024-11-25 12:58:39.377605] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:23:59.728 [2024-11-25 12:58:39.377624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e910 (9): Bad file descriptor 00:23:59.728 [2024-11-25 12:58:39.377658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe09a60 (9): Bad file descriptor 00:23:59.728 
[2024-11-25 12:58:39.377705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.728 [2024-11-25 12:58:39.377715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.728 
[2024-11-25 12:58:39.377730 - 12:58:39.378827] (READ sqid:1 cid:1 through cid:63 nsid:1, lba 16512 through 24448 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; 63 identical command/completion pairs elided) 00:23:59.729 
[2024-11-25 12:58:39.378836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1016f80 is same with the state(6) to be set 00:23:59.729 task offset: 24576 on job bdev=Nvme10n1 fails 00:23:59.730 1797.55 IOPS, 112.35 MiB/s [2024-11-25T11:58:39.633Z] [2024-11-25 12:58:39.380367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:59.730 [2024-11-25 12:58:39.380405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:59.730 
[2024-11-25 12:58:39.380495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.730 [2024-11-25 12:58:39.380507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.730 
[2024-11-25 12:58:39.380520 - 12:58:39.381586] (READ sqid:1 cid:1 through cid:63 nsid:1, lba 24704 through 32640 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; 63 identical command/completion pairs elided) 00:23:59.731 
[2024-11-25 12:58:39.381594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b8090 is same with the state(6) to be set 00:23:59.731 
[2024-11-25 12:58:39.382880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.731 [2024-11-25 12:58:39.382894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.731 
[2024-11-25 12:58:39.382908 - 12:58:39.383049] (READ sqid:1 cid:1 through cid:8 nsid:1, lba 24704 through 25600 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; 8 identical command/completion pairs elided) 00:23:59.731 [2024-11-25 12:58:39.383058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.731 [2024-11-25 12:58:39.383066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.731 [2024-11-25 12:58:39.383075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.731 [2024-11-25 12:58:39.383082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.731 [2024-11-25 12:58:39.383091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.731 [2024-11-25 12:58:39.383099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.731 [2024-11-25 12:58:39.383108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.731 [2024-11-25 12:58:39.383116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.731 [2024-11-25 12:58:39.383125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:59.732 [2024-11-25 12:58:39.383734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.732 [2024-11-25 12:58:39.383784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.732 [2024-11-25 12:58:39.383794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 
12:58:39.383905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.383970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.383978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b94d0 is same with the state(6) to be set 00:23:59.733 [2024-11-25 12:58:39.385248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.385262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.385276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.385285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.385297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.385306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.385317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.385326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.385340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.733 [2024-11-25 12:58:39.385349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.733 [2024-11-25 12:58:39.385360] nvme_qpair.c: 
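Each completion above carries its NVMe status as an "(SCT/SC)" pair: "(00/08)" is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08 (Command Aborted due to SQ Deletion), the expected status for I/O still in flight when a submission queue is torn down during qpair disconnect, and "dnr:0" shows the Do Not Retry bit is clear. When a run floods the console like this, a small filter makes the output digestible. The script below is a minimal sketch, not part of the SPDK test suite; the filename and regex are assumptions based purely on the line format shown above. It tallies the distinct statuses emitted by spdk_nvme_print_completion.

#!/usr/bin/env python3
"""tally_completions.py - hypothetical helper: summarize SPDK completion notices.

Reads a console log on stdin and counts each distinct (status name, SCT, SC)
combination printed by spdk_nvme_print_completion, e.g. the
"ABORTED - SQ DELETION (00/08)" lines above.
"""
import re
import sys
from collections import Counter

# Matches e.g. "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1"
COMPLETION_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<name>.+?) "
    r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\) qid:(?P<qid>\d+)"
)

def main():
    counts = Counter()
    for line in sys.stdin:
        m = COMPLETION_RE.search(line)
        if m:
            counts[(m["name"], int(m["sct"], 16), int(m["sc"], 16))] += 1
    for (name, sct, sc), n in counts.most_common():
        # SCT 0x0 / SC 0x08 is "Command Aborted due to SQ Deletion" in the NVMe spec
        print(f"{n:6d}  sct=0x{sct:x} sc=0x{sc:02x}  {name}")

if __name__ == "__main__":
    main()

Run as, for example, python3 tally_completions.py < console.log; against the excerpt here it reduces hundreds of notices to a single "ABORTED - SQ DELETION" line with its count.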
00:23:59.733 [2024-11-25 12:58:39.385248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.733 [2024-11-25 12:58:39.385262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command / ABORTED - SQ DELETION (00/08) completion pairs repeat for WRITE cid:1-3 (lba:32896-33152) and READ cid:4-63 (lba:25088-32640) ...]
00:23:59.735 [2024-11-25 12:58:39.387675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.735 [2024-11-25 12:58:39.387691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:1-50, lba:24704-30976 ...]
00:23:59.736 [2024-11-25 12:58:39.388571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.736 [2024-11-25 12:58:39.388578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 
12:58:39.388744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.388770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.388777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.390115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:59.736 [2024-11-25 12:58:39.390137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:59.736 [2024-11-25 12:58:39.390489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.736 [2024-11-25 12:58:39.390505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12639c0 with addr=10.0.0.2, port=4420 00:23:59.736 [2024-11-25 12:58:39.390518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12639c0 is same with the state(6) to be set 00:23:59.736 [2024-11-25 12:58:39.391117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.736 [2024-11-25 12:58:39.391157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe12f70 with addr=10.0.0.2, port=4420 00:23:59.736 [2024-11-25 12:58:39.391169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12f70 is same with the state(6) to be set 00:23:59.736 [2024-11-25 12:58:39.391180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:59.736 [2024-11-25 12:58:39.391188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:59.736 [2024-11-25 12:58:39.391197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:59.736 [2024-11-25 12:58:39.391208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:59.736 [2024-11-25 12:58:39.391241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126c670 (9): Bad file descriptor 00:23:59.736 [2024-11-25 12:58:39.391277] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:59.736 [2024-11-25 12:58:39.391296] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
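The "connect() failed, errno = 111" entries above are Linux ECONNREFUSED: while the test tears down and restarts the NVMe-oF target, there is no listener on 10.0.0.2:4420 (the standard NVMe/TCP port), so the initiator's reconnect attempts are refused. A minimal stand-alone sketch showing the same errno with plain POSIX sockets (illustrative only, not SPDK code; address and port mirror the log):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Target address/port taken from the log (NVMe/TCP default port 4420). */
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener on the port, connect() fails and errno is
         * ECONNREFUSED, which is 111 on Linux - the same value printed
         * by posix_sock_create in the log. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }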
00:23:59.736 [2024-11-25 12:58:39.391671] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:59.736 [2024-11-25 12:58:39.391787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:59.736 [2024-11-25 12:58:39.391804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:59.736 [2024-11-25 12:58:39.392324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.736 [2024-11-25 12:58:39.392361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe10830 with addr=10.0.0.2, port=4420
00:23:59.736 [2024-11-25 12:58:39.392374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe10830 is same with the state(6) to be set
00:23:59.736 [2024-11-25 12:58:39.392728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.736 [2024-11-25 12:58:39.392740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f8a0 with addr=10.0.0.2, port=4420
00:23:59.736 [2024-11-25 12:58:39.392747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f8a0 is same with the state(6) to be set
00:23:59.736 [2024-11-25 12:58:39.392759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12639c0 (9): Bad file descriptor
00:23:59.736 [2024-11-25 12:58:39.392770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12f70 (9): Bad file descriptor
00:23:59.736 [2024-11-25 12:58:39.392785] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:59.736 [2024-11-25 12:58:39.392799] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
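The "(00/08)" in the completion notices throughout this log is the NVMe status code type and status code: SCT 0x0 (Generic Command Status) with SC 0x08, "Command Aborted due to SQ Deletion". That is expected here: each controller reset deletes the I/O submission queues, so every in-flight READ is aborted rather than failed. A small sketch of how those fields unpack from the status halfword of a completion queue entry (field layout per the NVMe base specification; this is not SPDK's printer):

    #include <stdint.h>
    #include <stdio.h>

    /* Unpack the status halfword (completion queue entry dword 3, bits 31:16). */
    static void print_status(uint16_t status)
    {
        unsigned p   = status & 0x1;          /* phase tag */
        unsigned sc  = (status >> 1) & 0xff;  /* status code */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

        /* SCT 0x0 = Generic Command Status; within it, SC 0x08 =
         * "Command Aborted due to SQ Deletion" - the (00/08) in the log. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* 0x0010: sct=0, sc=0x08, p=0, m=0, dnr=0 - matches the aborted READs. */
        print_status(0x0010);
        return 0;
    }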
00:23:59.736 [2024-11-25 12:58:39.393669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.736 [2024-11-25 12:58:39.393683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.736 [2024-11-25 12:58:39.393700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 
12:58:39.393861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.393990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.393999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394033] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.737 [2024-11-25 12:58:39.394308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.737 [2024-11-25 12:58:39.394317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.394758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.394766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1215210 is same with the state(6) to be set 00:23:59.738 [2024-11-25 12:58:39.396339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396442] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.738 [2024-11-25 12:58:39.396526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.738 [2024-11-25 12:58:39.396535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.396982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.396991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.397001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.397008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.397017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.397024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.397034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.397041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.397050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.397058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.397067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.397074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.397084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.397091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.397100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.397107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.739 [2024-11-25 12:58:39.397117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.739 [2024-11-25 12:58:39.397124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:59.739-740 [2024-11-25 12:58:39.397134-397429] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:46-63 nsid:1 lba:30464-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; all 18 commands completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repetitive command/completion pairs condensed)
00:23:59.740 [2024-11-25 12:58:39.398726] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:59.740 [2024-11-25 12:58:39.398754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:59.740 [2024-11-25 12:58:39.398770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:59.740 [2024-11-25 12:58:39.398782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:59.740 [2024-11-25 12:58:39.399158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.740 [2024-11-25 12:58:39.399196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe12af0 with addr=10.0.0.2, port=4420
00:23:59.740 [2024-11-25 12:58:39.399208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12af0 is same with the state(6) to be set
00:23:59.740 [2024-11-25 12:58:39.399424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.740 [2024-11-25 12:58:39.399436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd28610 with addr=10.0.0.2, port=4420
00:23:59.740 [2024-11-25 12:58:39.399443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28610 is same with the state(6) to be set
00:23:59.740 [2024-11-25 12:58:39.399455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe10830 (9): Bad file descriptor
00:23:59.740 [2024-11-25 12:58:39.399466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f8a0 (9): Bad file descriptor
00:23:59.740 [2024-11-25 12:58:39.399476-399501] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.740 [2024-11-25 12:58:39.399511-399531] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
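For reference when triaging these entries: errno 111 on Linux is ECONNREFUSED, meaning nothing is listening at 10.0.0.2:4420 anymore; the target side is being torn down mid-I/O, which is exactly what this shutdown test exercises. A minimal standalone sketch (illustrative only, not SPDK source; the address and port are copied from the log) that reproduces the same errno with a plain POSIX socket:

    /* Hypothetical repro, not SPDK code: connect() to a port with no
     * listener fails with errno 111 (ECONNREFUSED) on Linux, the same
     * value posix_sock_create logs above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int try_connect(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
            close(fd);
            return -errno; /* -111 when the listener is gone */
        }
        return fd;
    }

    int main(void)
    {
        /* 10.0.0.2:4420 is the target address used throughout this log. */
        return try_connect("10.0.0.2", 4420) >= 0 ? 0 : 1;
    }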
00:23:59.740 [2024-11-25 12:58:39.399870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.740 [2024-11-25 12:58:39.399884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe09a60 with addr=10.0.0.2, port=4420
00:23:59.740 [2024-11-25 12:58:39.399892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe09a60 is same with the state(6) to be set
00:23:59.740 [2024-11-25 12:58:39.400248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.740 [2024-11-25 12:58:39.400258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e910 with addr=10.0.0.2, port=4420
00:23:59.740 [2024-11-25 12:58:39.400266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e910 is same with the state(6) to be set
00:23:59.740 [2024-11-25 12:58:39.400607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.740 [2024-11-25 12:58:39.400621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f0c0 with addr=10.0.0.2, port=4420
00:23:59.740 [2024-11-25 12:58:39.400628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f0c0 is same with the state(6) to be set
00:23:59.740 [2024-11-25 12:58:39.400638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12af0 (9): Bad file descriptor
00:23:59.740 [2024-11-25 12:58:39.400647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28610 (9): Bad file descriptor
00:23:59.740 [2024-11-25 12:58:39.400656-400676] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.740 [2024-11-25 12:58:39.400683-400703] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.740 [2024-11-25 12:58:39.401288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe09a60 (9): Bad file descriptor
00:23:59.740 [2024-11-25 12:58:39.401303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123e910 (9): Bad file descriptor
00:23:59.740 [2024-11-25 12:58:39.401313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f0c0 (9): Bad file descriptor
00:23:59.740 [2024-11-25 12:58:39.401321-401341] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.740 [2024-11-25 12:58:39.401349-401368] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.740 [2024-11-25 12:58:39.401437-401460] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.740 [2024-11-25 12:58:39.401467-401490] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.740-741 [2024-11-25 12:58:39.401497-401517] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.741-742 [2024-11-25 12:58:39.401570-402654] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; all 64 commands completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repetitive command/completion pairs condensed)
00:23:59.742 [2024-11-25 12:58:39.402662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1219390 is same with the state(6) to be set
00:23:59.742 [2024-11-25 12:58:39.404305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:59.742 [2024-11-25 12:58:39.404330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:59.742 Latency(us)
00:23:59.742 [2024-11-25T11:58:39.645Z] Device Information : runtime(s)    IOPS   MiB/s   Fail/s   TO/s      Average         min         max
00:23:59.742 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.03 seconds with error
00:23:59.742 Verification LBA range: start 0x0 length 0x400
00:23:59.742 Nvme1n1  :  1.03   123.90    7.74   61.95   0.00   340856.60   18896.21   286610.77
00:23:59.742 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.00 seconds with error
00:23:59.742 Verification LBA range: start 0x0 length 0x400
00:23:59.742 Nvme2n1  :  1.00   192.33   12.02   64.11   0.00   242200.43    9065.81   246415.36
00:23:59.742 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.04 seconds with error
00:23:59.742 Verification LBA range: start 0x0 length 0x400
00:23:59.742 Nvme3n1  :  1.04   185.37   11.59   61.79   0.00   246807.04   21845.33   249910.61
00:23:59.742 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.04 seconds with error
00:23:59.742 Verification LBA range: start 0x0 length 0x400
00:23:59.742 Nvme4n1  :  1.04   184.94   11.56   61.65   0.00   242660.05   15947.09   242920.11
00:23:59.742 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.04 seconds with error
00:23:59.742 Verification LBA range: start 0x0 length 0x400
00:23:59.742 Nvme5n1  :  1.04   188.36   11.77   61.51   0.00   234830.82   19551.57   248162.99
00:23:59.742 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.05 seconds with error
00:23:59.742 Verification LBA range: start 0x0 length 0x400
00:23:59.742 Nvme6n1  :  1.05   122.03    7.63   61.01   0.00   314714.74   17476.27   281367.89
00:23:59.743 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.04 seconds with error
00:23:59.743 Verification LBA range: start 0x0 length 0x400
00:23:59.743 Nvme7n1  :  1.04   184.09   11.51   61.36   0.00   229706.45   18022.40   249910.61
00:23:59.743 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.05 seconds with error
00:23:59.743 Verification LBA range: start 0x0 length 0x400
00:23:59.743 Nvme8n1  :  1.05   182.58   11.41   60.86   0.00   227136.85   21954.56   251658.24
00:23:59.743 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 1.06 seconds with error
00:23:59.743 Verification LBA range: start 0x0 length 0x400
00:23:59.743 Nvme9n1  :  1.06   181.66   11.35   60.55   0.00   223673.60   18131.63   270882.13
00:23:59.743 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536), ended in about 0.99 seconds with error
00:23:59.743 Verification LBA range: start 0x0 length 0x400
00:23:59.743 Nvme10n1 :  0.99   193.23   12.08   64.41   0.00   203170.19    2471.25   251658.24
00:23:59.743 [2024-11-25T11:58:39.646Z] ===================================================================================================================
00:23:59.743 [2024-11-25T11:58:39.646Z] Total    :        1738.48  108.66  619.20   0.00   246492.81    2471.25   286610.77
00:23:59.743 [2024-11-25 12:58:39.429800] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:59.743 [2024-11-25 12:58:39.429851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:59.743 [2024-11-25 12:58:39.430430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.430453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f8a0 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.430463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f8a0 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.430794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.430804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe10830 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.430811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe10830 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.431132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.431143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126c670 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.431151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126c670 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.431190] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
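The bdevperf summary above can be sanity-checked arithmetically: every job ran with a fixed 65536-byte IO size, so MiB/s should equal IOPS * 65536 / 2^20, i.e. IOPS / 16 (for Nvme1n1, 123.90 / 16 = 7.74 MiB/s, matching the table). A small illustrative check (not SPDK code) over a few rows copied verbatim from the table:

    /* Illustrative consistency check of the table above; the row values
     * are copied from the log, the formula assumes the stated 65536-byte
     * IO size. Compile with -lm for fabs(). */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const struct { const char *name; double iops, mibs; } rows[] = {
            {"Nvme1n1",  123.90,   7.74},
            {"Nvme2n1",  192.33,  12.02},
            {"Nvme10n1", 193.23,  12.08},
            {"Total",   1738.48, 108.66},
        };
        for (unsigned i = 0; i < sizeof rows / sizeof rows[0]; i++) {
            /* 65536 bytes per IO / 1048576 bytes per MiB == iops / 16 */
            double expect = rows[i].iops * 65536.0 / (1024.0 * 1024.0);
            printf("%-8s IOPS=%8.2f -> %6.2f MiB/s (table: %6.2f, ok=%d)\n",
                   rows[i].name, rows[i].iops, expect, rows[i].mibs,
                   fabs(expect - rows[i].mibs) < 0.01);
        }
        return 0;
    }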
00:23:59.743 [2024-11-25 12:58:39.431202] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:59.743 [2024-11-25 12:58:39.431213] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:23:59.743 [2024-11-25 12:58:39.431225] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:59.743 [2024-11-25 12:58:39.431521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:59.743 [2024-11-25 12:58:39.431532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:59.743 [2024-11-25 12:58:39.431541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:59.743 [2024-11-25 12:58:39.431550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:59.743 [2024-11-25 12:58:39.431612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f8a0 (9): Bad file descriptor
00:23:59.743 [2024-11-25 12:58:39.431626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe10830 (9): Bad file descriptor
00:23:59.743 [2024-11-25 12:58:39.431636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126c670 (9): Bad file descriptor
00:23:59.743 [2024-11-25 12:58:39.431683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:59.743 [2024-11-25 12:58:39.431694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:59.743 [2024-11-25 12:58:39.431703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:59.743 [2024-11-25 12:58:39.432042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.432055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe12f70 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.432063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12f70 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.432239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.432249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12639c0 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.432256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12639c0 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.432323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.432334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd28610 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.432342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28610 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.432669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.432679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe12af0 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.432687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe12af0 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.432694-432717] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.743 [2024-11-25 12:58:39.432725-432744] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.743 [2024-11-25 12:58:39.432751-432771] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
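The recurring "Failed to flush tqpair=... (9): Bad file descriptor" entries report errno 9 (EBADF): the qpair's socket descriptor has already been closed by the time the completion path tries to flush it. An illustrative snippet (not SPDK code) showing that any I/O on a closed descriptor yields the same errno:

    /* Illustrative only: demonstrates errno 9 (EBADF), the "(9)" seen in
     * the flush failures above, by writing to an already-closed fd. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/null", O_WRONLY);
        close(fd);                      /* descriptor is now invalid */
        if (write(fd, "x", 1) < 0)      /* any further I/O fails ...  */
            printf("errno = %d (%s)\n", /* ... with 9, Bad file descriptor */
                   errno, strerror(errno));
        return 0;
    }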
00:23:59.743 [2024-11-25 12:58:39.432995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.433007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f0c0 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.433015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f0c0 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.433216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.433226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123e910 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.433233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123e910 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.433441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:59.743 [2024-11-25 12:58:39.433451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe09a60 with addr=10.0.0.2, port=4420
00:23:59.743 [2024-11-25 12:58:39.433458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe09a60 is same with the state(6) to be set
00:23:59.743 [2024-11-25 12:58:39.433468-433544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe12f70 / 0x12639c0 / 0xd28610 / 0xe12af0 / 0xe0f0c0 / 0x123e910 / 0xe09a60 (9): Bad file descriptor (seven entries condensed)
00:23:59.743 [2024-11-25 12:58:39.433553-433572] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.743-744 [2024-11-25 12:58:39.433580-433599] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.744 [2024-11-25 12:58:39.433607-433626] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.744 [2024-11-25 12:58:39.433633-433656] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.744 [2024-11-25 12:58:39.433682-433702] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.744 [2024-11-25 12:58:39.433709-433729] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:23:59.744 [2024-11-25 12:58:39.433736-433756] nvme_ctrlr.c:4206/1826/1110, bdev_nvme.c:2280: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
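Taken together, the entries above trace one deterministic failure path per controller: disconnect schedules a reset, the reconnect poll hits ECONNREFUSED, the controller is marked failed, and bdev_nvme reports the reset as failed. A hedged sketch of that ordering as a tiny state machine (an illustrative model of the log's message sequence, not SPDK's actual structures or enums):

    /* Illustrative model only: the state names mirror the log messages
     * ("resetting controller" -> connect() errno 111 -> "Ctrlr is in
     * error state" -> "Resetting controller failed."), not SPDK code. */
    #include <stdio.h>

    enum ctrlr_state { RESETTING, RECONNECTING, ERROR_STATE, FAILED };

    static enum ctrlr_state step(enum ctrlr_state s, int connect_errno)
    {
        switch (s) {
        case RESETTING:    return RECONNECTING;
        case RECONNECTING: return connect_errno ? ERROR_STATE : RESETTING;
        case ERROR_STATE:  return FAILED;
        default:           return FAILED;
        }
    }

    int main(void)
    {
        const char *names[] = {"RESETTING", "RECONNECTING",
                               "ERROR_STATE", "FAILED"};
        enum ctrlr_state s = RESETTING;
        while (s != FAILED) {
            s = step(s, 111); /* every connect attempt in this log refused */
            printf("-> %s\n", names[s]);
        }
        return 0;
    }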
00:23:59.744 12:58:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 704860
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 704860
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 704860
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:00.774 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:00.774 rmmod nvme_tcp
00:24:01.034 rmmod nvme_fabrics
00:24:01.034 rmmod nvme_keyring
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 704556 ']'
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 704556
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 704556 ']'
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 704556
00:24:01.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (704556) - No such process
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 704556 is not found'
00:24:01.034 Process with pid 704556 is not found
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:01.034 12:58:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:02.945
00:24:02.945 real 0m8.221s
00:24:02.945 user 0m21.299s
00:24:02.945 sys 0m1.298s
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:24:02.945 ************************************
00:24:02.945 END TEST nvmf_shutdown_tc3
00:24:02.945 ************************************
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:02.945 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:03.206 ************************************
00:24:03.206 START TEST nvmf_shutdown_tc4
00:24:03.206 ************************************
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315-322 -- # declare empty arrays: pci_devs (-a), pci_net_devs (-a), pci_drivers (-A), net_devs (-ga), e810 (-ga), x722 (-ga), mlx (-ga) (repetitive declarations condensed)
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325-344 -- # populate e810 ($intel:0x1592, 0x159b), x722 ($intel:0x37d2) and mlx ($mellanox:0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) from pci_bus_cache (repetitive appends condensed)
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:03.206 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:03.206 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.206 12:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:03.206 Found net devices under 0000:31:00.0: cvl_0_0 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:03.206 Found net devices under 0000:31:00.1: cvl_0_1 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.206 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.207 12:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.207 12:58:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.207 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.207 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.207 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.207 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:24:03.467 00:24:03.467 --- 10.0.0.2 ping statistics --- 00:24:03.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.467 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:24:03.467 00:24:03.467 --- 10.0.0.1 ping statistics --- 00:24:03.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.467 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=706329 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 706329 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 706329 ']' 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
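Condensed, the nvmftestinit plumbing traced above gives the target NIC its own network namespace and then launches the target inside it. A sketch of the same sequence (root required; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are taken from this trace and exist only on a machine with these e810 ports bound to ice; the nvmf_tgt path is shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator
# -m 0x1E pins the reactors to cores 1-4 (bits 1..4 set), matching the four
# "Reactor started on core" notices that follow in this log
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &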
00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.467 12:58:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:03.467 [2024-11-25 12:58:43.300767] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:24:03.467 [2024-11-25 12:58:43.300825] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.727 [2024-11-25 12:58:43.398975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:03.727 [2024-11-25 12:58:43.429133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.727 [2024-11-25 12:58:43.429160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.727 [2024-11-25 12:58:43.429166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.727 [2024-11-25 12:58:43.429170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.727 [2024-11-25 12:58:43.429175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.727 [2024-11-25 12:58:43.430361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.727 [2024-11-25 12:58:43.430518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.727 [2024-11-25 12:58:43.430667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.727 [2024-11-25 12:58:43.430669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:04.298 [2024-11-25 12:58:44.155015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:04.298 12:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:24:04.298 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[the for/cat pair above repeats once per entry of num_subsystems; the remaining identical repetitions (timestamps 00:24:04.298-00:24:04.559) are collapsed here]
00:24:04.559 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:24:04.559 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:04.559 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
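The for/cat loop above appends one block of RPCs per subsystem into rpcs.txt, which rpc_cmd then replays against the target in a single batch; the blocks themselves are not echoed into this log. Given the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener reported next, each block plausibly amounts to the following sketch (the RPC names are real SPDK commands and the nqn.2016-06.io.spdk:cnodeN pattern appears later in this log, but the malloc sizes and serial number here are illustrative assumptions):

# hypothetical block for i=1; the 64 MiB / 512-byte-block sizes are assumed
./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420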
00:24:04.559 Malloc1
00:24:04.559 [2024-11-25 12:58:44.271981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:04.559 Malloc2
00:24:04.559 Malloc3
00:24:04.559 Malloc4
00:24:04.559 Malloc5
00:24:04.559 Malloc6
00:24:04.820 Malloc7
00:24:04.820 Malloc8
00:24:04.820 Malloc9
00:24:04.820 Malloc10
00:24:04.820 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:04.820 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:24:04.820 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:04.820 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:04.820 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=706710
00:24:04.820 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:24:04.820 12:58:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:24:05.080 [2024-11-25 12:58:44.743705] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 706329
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 706329 ']'
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 706329
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 706329
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 706329'
00:24:10.365 killing process with pid 706329
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 706329
00:24:10.365 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 706329
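This is the heart of shutdown_tc4: a 20-second random-write load (queue depth 128, 44 KiB I/Os) is started against the target, and five seconds in, the target process is killed out from under it. The killprocess helper first probes liveness with kill -0 and checks the process name so it never signals a sudo wrapper; a paraphrased sketch of that pattern (not the verbatim autotest_common.sh source):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    local process_name
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # refuse to kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # reap it if it is our child
}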
00:24:10.365 [2024-11-25 12:58:49.747725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2035140 is same with the state(6) to be set
00:24:10.365 [2024-11-25 12:58:49.748604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2035b00 is same with the state(6) to be set
[1 further recv-state error for tqpair=0x2035b00 (12:58:49.748630) collapsed]
00:24:10.365 [2024-11-25 12:58:49.748835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2034c70 is same with the state(6) to be set
[6 further recv-state errors for tqpair=0x2034c70 (12:58:49.748858-.748897) collapsed]
00:24:10.365 [2024-11-25 12:58:49.751716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20383d0 is same with the state(6) to be set
[4 further recv-state errors for tqpair=0x20383d0 (12:58:49.751741-.751758) collapsed]
00:24:10.365 Write completed with error (sct=0, sc=8)
00:24:10.365 starting I/O failed: -6
[dozens of interleaved "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines, here and between the entries below, collapsed]
00:24:10.365 [2024-11-25 12:58:49.752143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20388a0 is same with the state(6) to be set
[6 further recv-state errors for tqpair=0x20388a0 (12:58:49.752161-.752188) collapsed]
00:24:10.365 [2024-11-25 12:58:49.753286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:10.365 [2024-11-25 12:58:49.754372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036bc0 is same with the state(6) to be set
[6 further recv-state errors for tqpair=0x2036bc0 (12:58:49.754393-.754422) collapsed]
00:24:10.365 [2024-11-25 12:58:49.754505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:10.365 [2024-11-25 12:58:49.754591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2037090 is same with the state(6) to be set
[7 further recv-state errors for tqpair=0x2037090 (12:58:49.754606-.754634) collapsed]
00:24:10.365 [2024-11-25 12:58:49.754828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2037560 is same with the state(6) to be set
[7 further recv-state errors for tqpair=0x2037560 (12:58:49.754843-.754880) collapsed]
00:24:10.366 [2024-11-25 12:58:49.756374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:10.366 NVMe io qpair process completion error
00:24:10.366 [2024-11-25 12:58:49.757389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:10.366 [2024-11-25 12:58:49.758331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:10.366 [2024-11-25 12:58:49.759232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[the write-error storm continues past this point; the captured log is cut off mid-entry]
I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O 
failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.366 Write completed with error (sct=0, sc=8) 00:24:10.366 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 [2024-11-25 12:58:49.760832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:10.367 NVMe io qpair process completion error 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed 
with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 [2024-11-25 12:58:49.762182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed 
with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 [2024-11-25 12:58:49.762994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 
00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 [2024-11-25 12:58:49.763930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 
00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 [2024-11-25 12:58:49.767331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:10.367 NVMe io qpair process completion error 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error 
(sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 Write completed with error (sct=0, sc=8) 00:24:10.367 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 [2024-11-25 12:58:49.768513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:10.368 starting I/O failed: -6 00:24:10.368 starting I/O failed: -6 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 
00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 [2024-11-25 12:58:49.769509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O 
failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 [2024-11-25 12:58:49.770433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, 
sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 
00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 [2024-11-25 12:58:49.771997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:10.368 NVMe io qpair process completion error 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 [2024-11-25 12:58:49.773333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:10.368 starting I/O failed: -6 00:24:10.368 starting I/O failed: -6 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 
00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.368 starting I/O failed: -6 00:24:10.368 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 [2024-11-25 12:58:49.774347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, 
sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 [2024-11-25 12:58:49.775292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 
00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 
00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 [2024-11-25 12:58:49.776728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:10.369 NVMe io qpair process completion error 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 
00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 [2024-11-25 12:58:49.777722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 [2024-11-25 12:58:49.778531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 
(No such device or address) on qpair id 3 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 Write completed with error (sct=0, sc=8) 00:24:10.369 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 
00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 [2024-11-25 12:58:49.779468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write 
completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 [2024-11-25 12:58:49.781953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:10.370 NVMe io qpair process completion error 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 
00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 [2024-11-25 12:58:49.783075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 
00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 [2024-11-25 12:58:49.783899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.370 Write completed with error (sct=0, sc=8) 00:24:10.370 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed 
with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 [2024-11-25 12:58:49.784809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 
00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 
00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 [2024-11-25 12:58:49.786464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:10.371 NVMe io qpair process completion error 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write 
completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 [2024-11-25 12:58:49.787752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 [2024-11-25 12:58:49.788583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: 
-6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error 
(sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 Write completed with error (sct=0, sc=8) 00:24:10.371 starting I/O failed: -6 00:24:10.371 [2024-11-25 12:58:49.789511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 
00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 [2024-11-25 12:58:49.792403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.372 NVMe io qpair process completion error 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, 
sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 [2024-11-25 12:58:49.793497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: 
-6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 [2024-11-25 12:58:49.794324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error 
(sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 [2024-11-25 12:58:49.795254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 
00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.372 Write completed with error (sct=0, sc=8) 00:24:10.372 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 
00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 [2024-11-25 12:58:49.796881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.373 NVMe io qpair process completion error 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 [2024-11-25 12:58:49.798124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 Write completed with error (sct=0, sc=8) 00:24:10.373 starting I/O failed: -6 
00:24:10.373 Write completed with error (sct=0, sc=8)
00:24:10.373 starting I/O failed: -6
[... alternating completion/failure lines, with occasional back-to-back completions, continue for several dozen lines ...]
00:24:10.373 [2024-11-25 12:58:49.798980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:10.373 Write completed with error (sct=0, sc=8)
00:24:10.373 starting I/O failed: -6
[... alternating completion/failure lines continue for several dozen more lines ...]
00:24:10.373 [2024-11-25 12:58:49.799955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:10.373 Write completed with error (sct=0, sc=8)
00:24:10.373 starting I/O failed: -6
[... the pair above repeats a few more times ...]
00:24:10.373 starting I/O failed: -6
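A note for anyone triaging this stretch of the log: the -6 is -ENXIO ("No such device or address") surfacing from the TCP transport once the target side of each qpair has gone away, and (sct=0, sc=8) decodes, if the generic command status table of the NVMe base spec is read literally, to Command Aborted due to SQ Deletion. Both are the expected signature of shutdown_tc4 tearing the target down while spdk_nvme_perf still has writes in flight. A minimal sketch for condensing this spam when scanning a saved copy of the console output (the console.log file name is hypothetical):

  # count the aborted completions, then group the qpair losses by subsystem NQN
  grep -c 'Write completed with error (sct=0, sc=8)' console.log
  grep -o '\[nqn[^]]*\] CQ transport error -6' console.log | sort | uniq -c | sort -rn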
00:24:10.373 Write completed with error (sct=0, sc=8)
00:24:10.373 starting I/O failed: -6
[... the pair above repeats verbatim for several dozen more lines while the timestamp advances from 00:24:10.373 to 00:24:10.374 ...]
00:24:10.374 [2024-11-25 12:58:49.801627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:10.374 NVMe io qpair process completion error
00:24:10.374 Initializing NVMe Controllers
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:24:10.374 Controller IO queue size 128, less than required.
00:24:10.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:10.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:10.374 Initialization complete. Launching workers.
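The "Consider using lower queue depth or smaller IO size" warning above is spdk_nvme_perf noticing that its requested queue depth exceeds the 128-entry IO queues these fabrics controllers advertise, so excess requests will sit queued in the driver. A hedged sketch of rerunning the tool against one of these subsystems with a depth that fits (flag spellings as in spdk_nvme_perf's usage text; the -q/-o/-w/-t values are illustrative, not what this job used):

  # keep -q at or below the controller IO queue size (128 here); 4 KiB writes for 10 s
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'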
00:24:10.374 ========================================================
00:24:10.374 Latency(us)
00:24:10.374 Device Information : IOPS MiB/s Average min max
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1901.62 81.71 67329.72 663.98 119201.72
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1878.53 80.72 68183.33 610.52 121051.72
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1888.28 81.14 67849.61 859.79 151023.79
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1902.89 81.76 67357.93 681.50 120508.62
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1864.55 80.12 68766.50 725.16 131707.80
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1885.74 81.03 68031.74 799.94 134658.05
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1910.10 82.07 67187.35 603.50 119613.41
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1915.39 82.30 66318.93 567.50 119302.85
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1860.53 79.94 68295.11 667.06 117592.90
00:24:10.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1895.27 81.44 67066.06 844.45 121583.36
00:24:10.374 ========================================================
00:24:10.374 Total : 18902.90 812.23 67632.85 567.50 151023.79
00:24:10.374
00:24:10.374 [2024-11-25 12:58:49.806268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad060 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad6c0 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf540 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf360 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad9f0 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bae9e0 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bae050 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bad390 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bae380 is same with the state(6) to be set
00:24:10.374 [2024-11-25 12:58:49.806557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bae6b0 is same with the state(6) to be set
00:24:10.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:10.374 12:58:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:24:11.319 12:58:50
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 706710 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 706710 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 706710 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:11.319 12:58:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:11.319 rmmod nvme_tcp 00:24:11.319 rmmod nvme_fabrics 00:24:11.319 rmmod nvme_keyring 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 706329 ']' 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 706329 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 706329 ']' 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 706329 00:24:11.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (706329) - No such process 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 706329 is not found' 00:24:11.319 Process with pid 706329 is not found 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.319 12:58:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.864 00:24:13.864 real 0m10.316s 00:24:13.864 user 0m28.012s 00:24:13.864 sys 0m4.017s 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:13.864 ************************************ 00:24:13.864 END TEST nvmf_shutdown_tc4 00:24:13.864 ************************************ 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:13.864 00:24:13.864 real 0m44.320s 00:24:13.864 user 1m46.837s 00:24:13.864 sys 0m14.265s 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:24:13.864 ************************************ 00:24:13.864 END TEST nvmf_shutdown 00:24:13.864 ************************************ 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:13.864 ************************************ 00:24:13.864 START TEST nvmf_nsid 00:24:13.864 ************************************ 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:13.864 * Looking for test storage... 00:24:13.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:13.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.864 --rc genhtml_branch_coverage=1 00:24:13.864 --rc genhtml_function_coverage=1 00:24:13.864 --rc genhtml_legend=1 00:24:13.864 --rc geninfo_all_blocks=1 00:24:13.864 --rc geninfo_unexecuted_blocks=1 00:24:13.864 00:24:13.864 ' 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:13.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.864 --rc genhtml_branch_coverage=1 00:24:13.864 --rc genhtml_function_coverage=1 00:24:13.864 --rc genhtml_legend=1 00:24:13.864 --rc geninfo_all_blocks=1 00:24:13.864 --rc geninfo_unexecuted_blocks=1 00:24:13.864 00:24:13.864 ' 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:13.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.864 --rc genhtml_branch_coverage=1 00:24:13.864 --rc genhtml_function_coverage=1 00:24:13.864 --rc genhtml_legend=1 00:24:13.864 --rc geninfo_all_blocks=1 00:24:13.864 --rc geninfo_unexecuted_blocks=1 00:24:13.864 00:24:13.864 ' 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:13.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.864 --rc genhtml_branch_coverage=1 00:24:13.864 --rc genhtml_function_coverage=1 00:24:13.864 --rc genhtml_legend=1 00:24:13.864 --rc geninfo_all_blocks=1 00:24:13.864 --rc geninfo_unexecuted_blocks=1 00:24:13.864 00:24:13.864 ' 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.864 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.865 12:58:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:22.009 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:22.009 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
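The walk above is gather_supported_nvmf_pci_devs matching PCI vendor:device pairs against its e810/x722/mlx tables and echoing each hit; the following lines then find the kernel net devices under each matched function. A standalone sketch of the same sysfs pattern, with the 0x8086:0x159b IDs taken from the two "Found 0000:31:00.x" hits in this trace:

  # scan sysfs for Intel E810 functions (0x8086:0x159b) and list their net devices
  for pci in /sys/bus/pci/devices/*; do
    if [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]]; then
      echo "Found ${pci##*/} (0x8086 - 0x159b)"
      ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0 / cvl_0_1 in this job
    fi
  done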
00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:22.009 Found net devices under 0000:31:00.0: cvl_0_0 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:22.009 Found net devices under 0000:31:00.1: cvl_0_1 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.009 12:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.009 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.010 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.270 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.270 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.270 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.270 12:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.270 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.270 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:24:22.271 00:24:22.271 --- 10.0.0.2 ping statistics --- 00:24:22.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.271 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:24:22.271 00:24:22.271 --- 10.0.0.1 ping statistics --- 00:24:22.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.271 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=712746 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 712746 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 712746 ']' 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.271 12:59:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:22.533 [2024-11-25 12:59:02.222943] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
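The nvmf_tcp_init sequence a little further up splits the two NIC ports across a network namespace, so one host can act as both target (10.0.0.2, inside cvl_0_0_ns_spdk) and initiator (10.0.0.1), and the two pings verify reachability in each direction. A condensed sketch of that plumbing, with the interface names, addresses, and port taken straight from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # root ns -> namespaced target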
00:24:22.533 [2024-11-25 12:59:02.223009] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.533 [2024-11-25 12:59:02.317179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.533 [2024-11-25 12:59:02.356483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.533 [2024-11-25 12:59:02.356522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.533 [2024-11-25 12:59:02.356531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.533 [2024-11-25 12:59:02.356537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.533 [2024-11-25 12:59:02.356543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.533 [2024-11-25 12:59:02.357203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=712778 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
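get_main_ns_ip, traced above, resolves the address the second target should use: the transport selects the name of an environment variable, and that name is then expanded indirectly to its value (10.0.0.1 here, stored as tgt2addr just below). A hedged reconstruction of the helper; the TEST_TRANSPORT variable and the exact guards are assumptions inferred from the trace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP    # RDMA runs use the first target IP
            [tcp]=NVMF_INITIATOR_IP        # TCP runs use the initiator-side address
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # picks a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -n ${!ip} ]] && echo "${!ip}"       # indirect expansion yields 10.0.0.1 in this run
    }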
00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c0d70205-7ccb-4adb-a2e2-e8818ae1b5ec 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=8401825f-2ff6-4aac-8380-efc9c7589c9d 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:23.472 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2be5acba-50b8-4971-8a56-b81a22e3ab32 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:23.473 null0 00:24:23.473 null1 00:24:23.473 [2024-11-25 12:59:03.124323] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:24:23.473 [2024-11-25 12:59:03.124373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid712778 ] 00:24:23.473 null2 00:24:23.473 [2024-11-25 12:59:03.130635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.473 [2024-11-25 12:59:03.154842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 712778 /var/tmp/tgt2.sock 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 712778 ']' 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:23.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
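Three namespace UUIDs are generated above (ns1uuid through ns3uuid) and handed to the second target; the point of the nsid test, verified further down, is that the NGUID each namespace reports back must equal its UUID with the dashes stripped, compared case-insensitively. A minimal sketch of that check for the first namespace, built from the same commands the trace uses (tr, nvme id-ns, jq); the device path assumes the connect below lands on nvme0:

    uuid=c0d70205-7ccb-4adb-a2e2-e8818ae1b5ec        # ns1uuid from this run
    want=$(tr -d - <<< "$uuid")                      # uuid2nguid: drop the dashes
    got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${got^^} == "${want^^}" ]] && echo "nsid 1: NGUID matches UUID"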
00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.473 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:23.473 [2024-11-25 12:59:03.220498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.473 [2024-11-25 12:59:03.256825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.764 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.764 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:23.764 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:24.023 [2024-11-25 12:59:03.737584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.023 [2024-11-25 12:59:03.753706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:24.023 nvme0n1 nvme0n2 00:24:24.023 nvme1n1 00:24:24.023 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:24.023 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:24.023 12:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:25.478 12:59:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:26.418 12:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c0d70205-7ccb-4adb-a2e2-e8818ae1b5ec 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c0d702057ccb4adba2e2e8818ae1b5ec 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C0D702057CCB4ADBA2E2E8818AE1B5EC 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C0D702057CCB4ADBA2E2E8818AE1B5EC == \C\0\D\7\0\2\0\5\7\C\C\B\4\A\D\B\A\2\E\2\E\8\8\1\8\A\E\1\B\5\E\C ]] 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:26.418 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 8401825f-2ff6-4aac-8380-efc9c7589c9d 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8401825f2ff64aac8380efc9c7589c9d 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8401825F2FF64AAC8380EFC9C7589C9D 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 8401825F2FF64AAC8380EFC9C7589C9D == \8\4\0\1\8\2\5\F\2\F\F\6\4\A\A\C\8\3\8\0\E\F\C\9\C\7\5\8\9\C\9\D ]] 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:26.678 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:26.679 12:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2be5acba-50b8-4971-8a56-b81a22e3ab32 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2be5acba50b849718a56b81a22e3ab32 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2BE5ACBA50B849718A56B81A22E3AB32 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2BE5ACBA50B849718A56B81A22E3AB32 == \2\B\E\5\A\C\B\A\5\0\B\8\4\9\7\1\8\A\5\6\B\8\1\A\2\2\E\3\A\B\3\2 ]] 00:24:26.679 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 712778 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 712778 ']' 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 712778 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 712778 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 712778' 00:24:26.939 killing process with pid 712778 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 712778 00:24:26.939 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 712778 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.201 rmmod nvme_tcp 00:24:27.201 rmmod nvme_fabrics 00:24:27.201 rmmod nvme_keyring 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 712746 ']' 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 712746 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 712746 ']' 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 712746 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.201 12:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 712746 00:24:27.201 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.201 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.201 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 712746' 00:24:27.201 killing process with pid 712746 00:24:27.201 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 712746 00:24:27.201 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 712746 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.462 12:59:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.374 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:29.374 00:24:29.374 real 0m15.952s 00:24:29.374 user 0m11.380s 00:24:29.374 
sys 0m7.757s 00:24:29.374 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.374 12:59:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:29.374 ************************************ 00:24:29.374 END TEST nvmf_nsid 00:24:29.374 ************************************ 00:24:29.635 12:59:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:29.635 00:24:29.635 real 13m32.663s 00:24:29.635 user 27m49.029s 00:24:29.635 sys 4m10.543s 00:24:29.635 12:59:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.635 12:59:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 ************************************ 00:24:29.635 END TEST nvmf_target_extra 00:24:29.635 ************************************ 00:24:29.635 12:59:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:29.635 12:59:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:29.635 12:59:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.635 12:59:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:29.635 ************************************ 00:24:29.635 START TEST nvmf_host 00:24:29.635 ************************************ 00:24:29.635 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:29.635 * Looking for test storage... 00:24:29.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:29.635 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:29.635 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:29.635 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.898 --rc genhtml_branch_coverage=1 00:24:29.898 --rc genhtml_function_coverage=1 00:24:29.898 --rc genhtml_legend=1 00:24:29.898 --rc geninfo_all_blocks=1 00:24:29.898 --rc geninfo_unexecuted_blocks=1 00:24:29.898 00:24:29.898 ' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.898 --rc genhtml_branch_coverage=1 00:24:29.898 --rc genhtml_function_coverage=1 00:24:29.898 --rc genhtml_legend=1 00:24:29.898 --rc geninfo_all_blocks=1 00:24:29.898 --rc geninfo_unexecuted_blocks=1 00:24:29.898 00:24:29.898 ' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.898 --rc genhtml_branch_coverage=1 00:24:29.898 --rc genhtml_function_coverage=1 00:24:29.898 --rc genhtml_legend=1 00:24:29.898 --rc geninfo_all_blocks=1 00:24:29.898 --rc geninfo_unexecuted_blocks=1 00:24:29.898 00:24:29.898 ' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:29.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.898 --rc genhtml_branch_coverage=1 00:24:29.898 --rc genhtml_function_coverage=1 00:24:29.898 --rc genhtml_legend=1 00:24:29.898 --rc geninfo_all_blocks=1 00:24:29.898 --rc geninfo_unexecuted_blocks=1 00:24:29.898 00:24:29.898 ' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.898 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.899 ************************************ 00:24:29.899 START TEST nvmf_multicontroller 00:24:29.899 ************************************ 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:29.899 * Looking for test storage... 
00:24:29.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:29.899 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.162 --rc genhtml_branch_coverage=1 00:24:30.162 --rc genhtml_function_coverage=1 00:24:30.162 --rc genhtml_legend=1 00:24:30.162 --rc geninfo_all_blocks=1 00:24:30.162 --rc geninfo_unexecuted_blocks=1 00:24:30.162 00:24:30.162 ' 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.162 --rc genhtml_branch_coverage=1 00:24:30.162 --rc genhtml_function_coverage=1 00:24:30.162 --rc genhtml_legend=1 00:24:30.162 --rc geninfo_all_blocks=1 00:24:30.162 --rc geninfo_unexecuted_blocks=1 00:24:30.162 00:24:30.162 ' 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.162 --rc genhtml_branch_coverage=1 00:24:30.162 --rc genhtml_function_coverage=1 00:24:30.162 --rc genhtml_legend=1 00:24:30.162 --rc geninfo_all_blocks=1 00:24:30.162 --rc geninfo_unexecuted_blocks=1 00:24:30.162 00:24:30.162 ' 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.162 --rc genhtml_branch_coverage=1 00:24:30.162 --rc genhtml_function_coverage=1 00:24:30.162 --rc genhtml_legend=1 00:24:30.162 --rc geninfo_all_blocks=1 00:24:30.162 --rc geninfo_unexecuted_blocks=1 00:24:30.162 00:24:30.162 ' 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:30.162 12:59:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.162 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:30.163 12:59:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.163 12:59:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.306 
12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:38.306 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:38.306 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.306 12:59:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:38.306 Found net devices under 0000:31:00.0: cvl_0_0 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:38.306 Found net devices under 0000:31:00.1: cvl_0_1 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.306 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
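The block above is the NIC auto-detection for the multicontroller test: supported Intel E810 device IDs (0x1592, 0x159b in this run) are matched against the PCI bus, and each matching function is resolved to its kernel net device through sysfs, yielding cvl_0_0 and cvl_0_1 again. A minimal sketch of that sysfs step, using the address found in this run; the operstate guard stands in for the trace's "up" check and is an assumption about the exact condition tested:

    pci=0000:31:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $dev ]] || continue                                     # no netdev bound here
        [[ $(cat "$dev/operstate" 2>/dev/null) == up ]] || continue   # assumed link-up check
        echo "Found net devices under $pci: ${dev##*/}"               # -> cvl_0_0
    done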
00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.307 12:59:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:24:38.307 00:24:38.307 --- 10.0.0.2 ping statistics --- 00:24:38.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.307 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:24:38.307 00:24:38.307 --- 10.0.0.1 ping statistics --- 00:24:38.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.307 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=718558 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 718558 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 718558 ']' 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.307 12:59:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.568 [2024-11-25 12:59:18.234892] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
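nvmf_tcp_init (@250-@291 above) splits the two ports across a network-namespace boundary so initiator and target traffic really crosses the link: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions are verified with a single ping. A condensed sketch of that sequence, assuming the interface names from this run:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 10.0.0.2                        # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> initiator

The target is then launched inside that namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE); mask 0xE is binary 1110, which is why the log reports exactly three reactors, on cores 1, 2 and 3.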
00:24:38.568 [2024-11-25 12:59:18.234942] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.568 [2024-11-25 12:59:18.339102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:38.568 [2024-11-25 12:59:18.374836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.568 [2024-11-25 12:59:18.374873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.568 [2024-11-25 12:59:18.374881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.568 [2024-11-25 12:59:18.374888] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.568 [2024-11-25 12:59:18.374894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.568 [2024-11-25 12:59:18.376369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.568 [2024-11-25 12:59:18.376524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.568 [2024-11-25 12:59:18.376525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.139 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.139 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:39.139 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.139 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.139 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.400 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.400 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.400 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.400 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.400 [2024-11-25 12:59:19.077350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.400 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.400 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 Malloc0 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 [2024-11-25 12:59:19.128898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 [2024-11-25 12:59:19.136796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 Malloc1 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=718663 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 718663 /var/tmp/bdevperf.sock 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 718663 ']' 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
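Everything from @27 to @41 is target provisioning over /var/tmp/spdk.sock: one TCP transport, two 64 MiB malloc bdevs (512 B blocks), and two subsystems that each expose a namespace on both port 4420 and port 4421 of 10.0.0.2. The same sequence as direct RPC calls — a sketch assuming SPDK's scripts/rpc.py; rpc_cmd in the trace is a thin wrapper around it:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2/Malloc1 repeat the same pattern on the same two ports

Two listeners per subsystem are what give the multicontroller test a second network path to attach later.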
00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.401 12:59:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.342 NVMe0n1 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.342 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.604 1 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.604 request: 00:24:40.604 { 00:24:40.604 "name": "NVMe0", 00:24:40.604 "trtype": "tcp", 00:24:40.604 "traddr": "10.0.0.2", 00:24:40.604 "adrfam": "ipv4", 00:24:40.604 "trsvcid": "4420", 00:24:40.604 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:40.604 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:40.604 "hostaddr": "10.0.0.1", 00:24:40.604 "prchk_reftag": false, 00:24:40.604 "prchk_guard": false, 00:24:40.604 "hdgst": false, 00:24:40.604 "ddgst": false, 00:24:40.604 "allow_unrecognized_csi": false, 00:24:40.604 "method": "bdev_nvme_attach_controller", 00:24:40.604 "req_id": 1 00:24:40.604 } 00:24:40.604 Got JSON-RPC error response 00:24:40.604 response: 00:24:40.604 { 00:24:40.604 "code": -114, 00:24:40.604 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:40.604 } 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.604 request: 00:24:40.604 { 00:24:40.604 "name": "NVMe0", 00:24:40.604 "trtype": "tcp", 00:24:40.604 "traddr": "10.0.0.2", 00:24:40.604 "adrfam": "ipv4", 00:24:40.604 "trsvcid": "4420", 00:24:40.604 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:40.604 "hostaddr": "10.0.0.1", 00:24:40.604 "prchk_reftag": false, 00:24:40.604 "prchk_guard": false, 00:24:40.604 "hdgst": false, 00:24:40.604 "ddgst": false, 00:24:40.604 "allow_unrecognized_csi": false, 00:24:40.604 "method": "bdev_nvme_attach_controller", 00:24:40.604 "req_id": 1 00:24:40.604 } 00:24:40.604 Got JSON-RPC error response 00:24:40.604 response: 00:24:40.604 { 00:24:40.604 "code": -114, 00:24:40.604 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:40.604 } 00:24:40.604 12:59:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:40.604 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.605 request: 00:24:40.605 { 00:24:40.605 "name": "NVMe0", 00:24:40.605 "trtype": "tcp", 00:24:40.605 "traddr": "10.0.0.2", 00:24:40.605 "adrfam": "ipv4", 00:24:40.605 "trsvcid": "4420", 00:24:40.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.605 "hostaddr": "10.0.0.1", 00:24:40.605 "prchk_reftag": false, 00:24:40.605 "prchk_guard": false, 00:24:40.605 "hdgst": false, 00:24:40.605 "ddgst": false, 00:24:40.605 "multipath": "disable", 00:24:40.605 "allow_unrecognized_csi": false, 00:24:40.605 "method": "bdev_nvme_attach_controller", 00:24:40.605 "req_id": 1 00:24:40.605 } 00:24:40.605 Got JSON-RPC error response 00:24:40.605 response: 00:24:40.605 { 00:24:40.605 "code": -114, 00:24:40.605 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:40.605 } 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.605 12:59:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.605 request: 00:24:40.605 { 00:24:40.605 "name": "NVMe0", 00:24:40.605 "trtype": "tcp", 00:24:40.605 "traddr": "10.0.0.2", 00:24:40.605 "adrfam": "ipv4", 00:24:40.605 "trsvcid": "4420", 00:24:40.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.605 "hostaddr": "10.0.0.1", 00:24:40.605 "prchk_reftag": false, 00:24:40.605 "prchk_guard": false, 00:24:40.605 "hdgst": false, 00:24:40.605 "ddgst": false, 00:24:40.605 "multipath": "failover", 00:24:40.605 "allow_unrecognized_csi": false, 00:24:40.605 "method": "bdev_nvme_attach_controller", 00:24:40.605 "req_id": 1 00:24:40.605 } 00:24:40.605 Got JSON-RPC error response 00:24:40.605 response: 00:24:40.605 { 00:24:40.605 "code": -114, 00:24:40.605 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:40.605 } 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.605 NVMe0n1 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
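The four NOT cases above probe bdev_nvme_attach_controller's duplicate handling through bdevperf's RPC socket, and all four fail with JSON-RPC error -114: once a controller named NVMe0 exists on 10.0.0.2:4420, re-attaching the same name to the same traddr/trsvcid is rejected whether the request changes the hostnqn (-q), points at a different subsystem (cnode2), or passes -x disable / -x failover. What does succeed is the same name on a different path — the second listener on port 4421 — which extends the existing NVMe0 controller instead of creating a new one. A sketch of the two outcomes, assuming rpc.py as in the earlier note:

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # First attach creates controller NVMe0 and bdev NVMe0n1:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Same name + same traddr:trsvcid again -> error -114, regardless of
    # -q <hostnqn>, a different -n <subnqn>, -x disable, or -x failover.
    # Same name on the *other* listener is accepted as an additional path:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1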
00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.605 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.866 00:24:40.866 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.866 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.866 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:40.866 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.866 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.866 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.866 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:40.866 12:59:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:42.285 { 00:24:42.285 "results": [ 00:24:42.285 { 00:24:42.285 "job": "NVMe0n1", 00:24:42.285 "core_mask": "0x1", 00:24:42.285 "workload": "write", 00:24:42.285 "status": "finished", 00:24:42.285 "queue_depth": 128, 00:24:42.285 "io_size": 4096, 00:24:42.285 "runtime": 1.008133, 00:24:42.285 "iops": 28471.441764132313, 00:24:42.285 "mibps": 111.21656939114185, 00:24:42.285 "io_failed": 0, 00:24:42.285 "io_timeout": 0, 00:24:42.285 "avg_latency_us": 4487.421321348523, 00:24:42.285 "min_latency_us": 2116.266666666667, 00:24:42.285 "max_latency_us": 11905.706666666667 00:24:42.285 } 00:24:42.285 ], 00:24:42.285 "core_count": 1 00:24:42.285 } 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 718663 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 718663 ']' 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 718663 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718663 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.285 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 718663' 00:24:42.285 killing process with pid 718663 00:24:42.286 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 718663 00:24:42.286 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 718663 00:24:42.286 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.286 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.286 12:59:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:42.286 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:42.286 [2024-11-25 12:59:19.247708] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:24:42.286 [2024-11-25 12:59:19.247778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718663 ] 00:24:42.286 [2024-11-25 12:59:19.326285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.286 [2024-11-25 12:59:19.362578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.286 [2024-11-25 12:59:20.650422] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name ea938059-0c62-442a-9e0d-902123243284 already exists 00:24:42.286 [2024-11-25 12:59:20.650451] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:ea938059-0c62-442a-9e0d-902123243284 alias for bdev NVMe1n1 00:24:42.286 [2024-11-25 12:59:20.650460] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:42.286 Running I/O for 1 seconds... 00:24:42.286 28463.00 IOPS, 111.18 MiB/s 00:24:42.286 Latency(us) 00:24:42.286 [2024-11-25T11:59:22.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.286 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:42.286 NVMe0n1 : 1.01 28471.44 111.22 0.00 0.00 4487.42 2116.27 11905.71 00:24:42.286 [2024-11-25T11:59:22.189Z] =================================================================================================================== 00:24:42.286 [2024-11-25T11:59:22.189Z] Total : 28471.44 111.22 0.00 0.00 4487.42 2116.27 11905.71 00:24:42.286 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.286 00:24:42.286 Latency(us) 00:24:42.286 [2024-11-25T11:59:22.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.286 [2024-11-25T11:59:22.189Z] =================================================================================================================== 00:24:42.286 [2024-11-25T11:59:22.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.286 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.286 rmmod nvme_tcp 00:24:42.286 rmmod nvme_fabrics 00:24:42.286 rmmod nvme_keyring 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:42.286 
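The try.txt block above is printed by the harness's pap helper (@1598-@1605): it finds each captured bdevperf log, dumps it between `--- path ---` markers, and deletes it so a later run starts clean. A sketch consistent with that trace — the real implementation lives in autotest_common.sh and may differ in detail:

    pap() {
        local file
        while read -r file; do
            echo "--- $file ---"
            cat "$file"
            echo "--- $file ---"
            rm -f "$file"
        done < <(find "$@" -type f | sort -u)
    }
    pap "$testdir/try.txt"    # $testdir as set by the multicontroller test

Note the two ERROR lines inside the dump: attaching NVMe1 to a namespace whose UUID alias is already registered for NVMe0n1 makes spdk_bdev_register() fail, but the controller itself still appears, and the test only checks the controller count (the '[' 2 '!=' 2 ']' above), so the failed alias registration is not fatal here.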
12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 718558 ']' 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 718558 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 718558 ']' 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 718558 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718558 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 718558' 00:24:42.286 killing process with pid 718558 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 718558 00:24:42.286 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 718558 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.546 12:59:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.087 00:24:45.087 real 0m14.742s 00:24:45.087 user 0m17.295s 00:24:45.087 sys 0m7.070s 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:45.087 ************************************ 00:24:45.087 END TEST nvmf_multicontroller 00:24:45.087 ************************************ 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.087 ************************************ 00:24:45.087 START TEST nvmf_aer 00:24:45.087 ************************************ 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:45.087 * Looking for test storage... 00:24:45.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:45.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.087 --rc genhtml_branch_coverage=1 00:24:45.087 --rc genhtml_function_coverage=1 00:24:45.087 --rc genhtml_legend=1 00:24:45.087 --rc geninfo_all_blocks=1 00:24:45.087 --rc geninfo_unexecuted_blocks=1 00:24:45.087 00:24:45.087 ' 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:45.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.087 --rc genhtml_branch_coverage=1 00:24:45.087 --rc genhtml_function_coverage=1 00:24:45.087 --rc genhtml_legend=1 00:24:45.087 --rc geninfo_all_blocks=1 00:24:45.087 --rc geninfo_unexecuted_blocks=1 00:24:45.087 00:24:45.087 ' 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:45.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.087 --rc genhtml_branch_coverage=1 00:24:45.087 --rc genhtml_function_coverage=1 00:24:45.087 --rc genhtml_legend=1 00:24:45.087 --rc geninfo_all_blocks=1 00:24:45.087 --rc geninfo_unexecuted_blocks=1 00:24:45.087 00:24:45.087 ' 00:24:45.087 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:45.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.088 --rc genhtml_branch_coverage=1 00:24:45.088 --rc genhtml_function_coverage=1 00:24:45.088 --rc genhtml_legend=1 00:24:45.088 --rc geninfo_all_blocks=1 00:24:45.088 --rc geninfo_unexecuted_blocks=1 00:24:45.088 00:24:45.088 ' 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.088 12:59:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:53.227 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:53.227 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:53.227 Found net devices under 0000:31:00.0: cvl_0_0 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.227 12:59:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:53.227 Found net devices under 0000:31:00.1: cvl_0_1 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.227 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.228 
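
Condensed, the network plumbing traced above amounts to the short sequence below: the target-facing port is moved into its own network namespace so initiator and target can talk over real hardware on a single host. A minimal sketch, using the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing this run reports:

# move the target-facing port into a private namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# the initiator keeps cvl_0_1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let the initiator reach the NVMe/TCP listener on 4420
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
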
12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:24:53.228 00:24:53.228 --- 10.0.0.2 ping statistics --- 00:24:53.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.228 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:53.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:24:53.228 00:24:53.228 --- 10.0.0.1 ping statistics --- 00:24:53.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.228 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=723958 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 723958 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 723958 ']' 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.228 12:59:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:53.228 [2024-11-25 12:59:33.024365] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
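
The waitforlisten step above simply polls until the target process is alive and its RPC socket answers. A rough standalone equivalent (wait_for_rpc is a hypothetical name; the real helper lives in test/common/autotest_common.sh), assuming SPDK's stock scripts/rpc.py and the default /var/tmp/spdk.sock socket:

wait_for_rpc() {
    local pid=$1 i=0
    while (( i++ < 100 )); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died early
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            &>/dev/null && return 0               # RPC socket answers
        sleep 0.5
    done
    return 1                                      # timed out
}
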
00:24:53.228 [2024-11-25 12:59:33.024414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.228 [2024-11-25 12:59:33.112650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:53.488 [2024-11-25 12:59:33.148660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.488 [2024-11-25 12:59:33.148695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.488 [2024-11-25 12:59:33.148703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.488 [2024-11-25 12:59:33.148709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.488 [2024-11-25 12:59:33.148715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.488 [2024-11-25 12:59:33.150366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.488 [2024-11-25 12:59:33.150453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.488 [2024-11-25 12:59:33.150607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.488 [2024-11-25 12:59:33.150607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.059 [2024-11-25 12:59:33.864330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.059 Malloc0 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:54.059 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.060 [2024-11-25 12:59:33.930132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.060 [ 00:24:54.060 { 00:24:54.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:54.060 "subtype": "Discovery", 00:24:54.060 "listen_addresses": [], 00:24:54.060 "allow_any_host": true, 00:24:54.060 "hosts": [] 00:24:54.060 }, 00:24:54.060 { 00:24:54.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.060 "subtype": "NVMe", 00:24:54.060 "listen_addresses": [ 00:24:54.060 { 00:24:54.060 "trtype": "TCP", 00:24:54.060 "adrfam": "IPv4", 00:24:54.060 "traddr": "10.0.0.2", 00:24:54.060 "trsvcid": "4420" 00:24:54.060 } 00:24:54.060 ], 00:24:54.060 "allow_any_host": true, 00:24:54.060 "hosts": [], 00:24:54.060 "serial_number": "SPDK00000000000001", 00:24:54.060 "model_number": "SPDK bdev Controller", 00:24:54.060 "max_namespaces": 2, 00:24:54.060 "min_cntlid": 1, 00:24:54.060 "max_cntlid": 65519, 00:24:54.060 "namespaces": [ 00:24:54.060 { 00:24:54.060 "nsid": 1, 00:24:54.060 "bdev_name": "Malloc0", 00:24:54.060 "name": "Malloc0", 00:24:54.060 "nguid": "970C2906596E4E4984EBF6197253F73D", 00:24:54.060 "uuid": "970c2906-596e-4e49-84eb-f6197253f73d" 00:24:54.060 } 00:24:54.060 ] 00:24:54.060 } 00:24:54.060 ] 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=724306 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:54.060 12:59:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:54.321 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.321 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:54.321 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:54.321 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:54.321 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.321 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:24:54.321 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:24:54.321 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.582 Malloc1 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.582 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.582 Asynchronous Event Request test 00:24:54.582 Attaching to 10.0.0.2 00:24:54.582 Attached to 10.0.0.2 00:24:54.582 Registering asynchronous event callbacks... 00:24:54.582 Starting namespace attribute notice tests for all controllers... 00:24:54.582 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:54.583 aer_cb - Changed Namespace 00:24:54.583 Cleaning up... 
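
The AER handshake just exercised is compact enough to restate: the aer tool arms the asynchronous-event callback, signals readiness through the touch file, and hot-adding a second namespace then fires the namespace-attribute-changed AEN. A condensed sketch of the same steps, assuming rpc_cmd forwards to this run's target socket as elsewhere in the trace:

rm -f /tmp/aer_touch_file
# aer connects to cnode1, registers the AER callback, then touches the file
test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # wait until armed
# attaching nsid 2 triggers the namespace-attribute-changed AEN
rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait $aerpid   # returns 0 once aer_cb has seen the changed namespace
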
00:24:54.583 [ 00:24:54.583 { 00:24:54.583 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:54.583 "subtype": "Discovery", 00:24:54.583 "listen_addresses": [], 00:24:54.583 "allow_any_host": true, 00:24:54.583 "hosts": [] 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.583 "subtype": "NVMe", 00:24:54.583 "listen_addresses": [ 00:24:54.583 { 00:24:54.583 "trtype": "TCP", 00:24:54.583 "adrfam": "IPv4", 00:24:54.583 "traddr": "10.0.0.2", 00:24:54.583 "trsvcid": "4420" 00:24:54.583 } 00:24:54.583 ], 00:24:54.583 "allow_any_host": true, 00:24:54.583 "hosts": [], 00:24:54.583 "serial_number": "SPDK00000000000001", 00:24:54.583 "model_number": "SPDK bdev Controller", 00:24:54.583 "max_namespaces": 2, 00:24:54.583 "min_cntlid": 1, 00:24:54.583 "max_cntlid": 65519, 00:24:54.583 "namespaces": [ 00:24:54.583 { 00:24:54.583 "nsid": 1, 00:24:54.583 "bdev_name": "Malloc0", 00:24:54.583 "name": "Malloc0", 00:24:54.583 "nguid": "970C2906596E4E4984EBF6197253F73D", 00:24:54.583 "uuid": "970c2906-596e-4e49-84eb-f6197253f73d" 00:24:54.583 }, 00:24:54.583 { 00:24:54.583 "nsid": 2, 00:24:54.583 "bdev_name": "Malloc1", 00:24:54.583 "name": "Malloc1", 00:24:54.583 "nguid": "929383BB15BB4B0CB5932A295812CC80", 00:24:54.583 "uuid": "929383bb-15bb-4b0c-b593-2a295812cc80" 00:24:54.583 } 00:24:54.583 ] 00:24:54.583 } 00:24:54.583 ] 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 724306 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.583 rmmod 
nvme_tcp 00:24:54.583 rmmod nvme_fabrics 00:24:54.583 rmmod nvme_keyring 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 723958 ']' 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 723958 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 723958 ']' 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 723958 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.583 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 723958 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 723958' 00:24:54.844 killing process with pid 723958 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 723958 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 723958 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.844 12:59:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.392 00:24:57.392 real 0m12.253s 00:24:57.392 user 0m8.348s 00:24:57.392 sys 0m6.706s 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:57.392 ************************************ 00:24:57.392 END TEST nvmf_aer 00:24:57.392 ************************************ 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
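
Teardown is the mirror image of setup. Condensed from the trace above and the namespace cleanup that follows (the ip netns delete line is an assumption about what _remove_spdk_ns does with its tracing disabled):

kill "$nvmfpid" && wait "$nvmfpid"        # stop reactor_0 / nvmf_tgt
# strip only the rules tagged SPDK_NVMF, leave the rest of the firewall alone
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk           # assumed: hands cvl_0_0 back to the root ns
ip -4 addr flush cvl_0_1                  # drop the initiator-side address
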
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.392 ************************************ 00:24:57.392 START TEST nvmf_async_init 00:24:57.392 ************************************ 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:57.392 * Looking for test storage... 00:24:57.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.392 --rc genhtml_branch_coverage=1 00:24:57.392 --rc genhtml_function_coverage=1 00:24:57.392 --rc genhtml_legend=1 00:24:57.392 --rc geninfo_all_blocks=1 00:24:57.392 --rc geninfo_unexecuted_blocks=1 00:24:57.392 00:24:57.392 ' 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.392 --rc genhtml_branch_coverage=1 00:24:57.392 --rc genhtml_function_coverage=1 00:24:57.392 --rc genhtml_legend=1 00:24:57.392 --rc geninfo_all_blocks=1 00:24:57.392 --rc geninfo_unexecuted_blocks=1 00:24:57.392 00:24:57.392 ' 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.392 --rc genhtml_branch_coverage=1 00:24:57.392 --rc genhtml_function_coverage=1 00:24:57.392 --rc genhtml_legend=1 00:24:57.392 --rc geninfo_all_blocks=1 00:24:57.392 --rc geninfo_unexecuted_blocks=1 00:24:57.392 00:24:57.392 ' 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.392 --rc genhtml_branch_coverage=1 00:24:57.392 --rc genhtml_function_coverage=1 00:24:57.392 --rc genhtml_legend=1 00:24:57.392 --rc geninfo_all_blocks=1 00:24:57.392 --rc geninfo_unexecuted_blocks=1 00:24:57.392 00:24:57.392 ' 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.392 12:59:36 
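
The lcov gate above leans on scripts/common.sh's field-by-field version compare (split on separators, then compare each field numerically). A simplified sketch of the same idea, assuming purely numeric dot-separated versions rather than the full .-: splitting the real helper does:

version_lt() {
    local -a a b; local i
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov is older than 2.x"
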
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.392 12:59:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.392 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:57.393 12:59:37 
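
Those three variables describe the whole backing store for this test: a 1024 MB null bdev with 512-byte blocks, later exposed through controller nvme0. They feed a single RPC pair that appears further down in this trace; as a sketch:

# name, total size (MB), block size -- matching the variables above
rpc_cmd bdev_null_create null0 1024 512
rpc_cmd bdev_wait_for_examine   # let bdev modules finish probing before use
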
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b739a215d67b4cb5856362887f372f1f 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.393 12:59:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:05.538 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:05.538 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:05.538 Found net devices under 0000:31:00.0: cvl_0_0 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:05.538 Found net devices under 0000:31:00.1: cvl_0_1 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.538 12:59:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.538 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.539 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.539 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:25:05.801 00:25:05.801 --- 10.0.0.2 ping statistics --- 00:25:05.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.801 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:25:05.801 00:25:05.801 --- 10.0.0.1 ping statistics --- 00:25:05.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.801 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=728996 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 728996 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 728996 ']' 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.801 12:59:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.801 [2024-11-25 12:59:45.617278] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:25:05.801 [2024-11-25 12:59:45.617343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.064 [2024-11-25 12:59:45.710512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.064 [2024-11-25 12:59:45.750336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.064 [2024-11-25 12:59:45.750375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.064 [2024-11-25 12:59:45.750384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.064 [2024-11-25 12:59:45.750391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.064 [2024-11-25 12:59:45.750397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.064 [2024-11-25 12:59:45.751029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.733 [2024-11-25 12:59:46.448178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.733 null0 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.733 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b739a215d67b4cb5856362887f372f1f 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.734 [2024-11-25 12:59:46.488436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.734 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 nvme0n1 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 [ 00:25:07.035 { 00:25:07.035 "name": "nvme0n1", 00:25:07.035 "aliases": [ 00:25:07.035 "b739a215-d67b-4cb5-8563-62887f372f1f" 00:25:07.035 ], 00:25:07.035 "product_name": "NVMe disk", 00:25:07.035 "block_size": 512, 00:25:07.035 "num_blocks": 2097152, 00:25:07.035 "uuid": "b739a215-d67b-4cb5-8563-62887f372f1f", 00:25:07.035 "numa_id": 0, 00:25:07.035 "assigned_rate_limits": { 00:25:07.035 "rw_ios_per_sec": 0, 00:25:07.035 "rw_mbytes_per_sec": 0, 00:25:07.035 "r_mbytes_per_sec": 0, 00:25:07.035 "w_mbytes_per_sec": 0 00:25:07.035 }, 00:25:07.035 "claimed": false, 00:25:07.035 "zoned": false, 00:25:07.035 "supported_io_types": { 00:25:07.035 "read": true, 00:25:07.035 "write": true, 00:25:07.035 "unmap": false, 00:25:07.035 "flush": true, 00:25:07.035 "reset": true, 00:25:07.035 "nvme_admin": true, 00:25:07.035 "nvme_io": true, 00:25:07.035 "nvme_io_md": false, 00:25:07.035 "write_zeroes": true, 00:25:07.035 "zcopy": false, 00:25:07.035 "get_zone_info": false, 00:25:07.035 "zone_management": false, 00:25:07.035 "zone_append": false, 00:25:07.035 "compare": true, 00:25:07.035 "compare_and_write": true, 00:25:07.035 "abort": true, 00:25:07.035 "seek_hole": false, 00:25:07.035 "seek_data": false, 00:25:07.035 "copy": true, 00:25:07.035 "nvme_iov_md": false 00:25:07.035 }, 00:25:07.035 
"memory_domains": [ 00:25:07.035 { 00:25:07.035 "dma_device_id": "system", 00:25:07.035 "dma_device_type": 1 00:25:07.035 } 00:25:07.035 ], 00:25:07.035 "driver_specific": { 00:25:07.035 "nvme": [ 00:25:07.035 { 00:25:07.035 "trid": { 00:25:07.035 "trtype": "TCP", 00:25:07.035 "adrfam": "IPv4", 00:25:07.035 "traddr": "10.0.0.2", 00:25:07.035 "trsvcid": "4420", 00:25:07.035 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:07.035 }, 00:25:07.035 "ctrlr_data": { 00:25:07.035 "cntlid": 1, 00:25:07.035 "vendor_id": "0x8086", 00:25:07.035 "model_number": "SPDK bdev Controller", 00:25:07.035 "serial_number": "00000000000000000000", 00:25:07.035 "firmware_revision": "25.01", 00:25:07.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:07.035 "oacs": { 00:25:07.035 "security": 0, 00:25:07.035 "format": 0, 00:25:07.035 "firmware": 0, 00:25:07.035 "ns_manage": 0 00:25:07.035 }, 00:25:07.035 "multi_ctrlr": true, 00:25:07.035 "ana_reporting": false 00:25:07.035 }, 00:25:07.035 "vs": { 00:25:07.035 "nvme_version": "1.3" 00:25:07.035 }, 00:25:07.035 "ns_data": { 00:25:07.035 "id": 1, 00:25:07.035 "can_share": true 00:25:07.035 } 00:25:07.035 } 00:25:07.035 ], 00:25:07.035 "mp_policy": "active_passive" 00:25:07.035 } 00:25:07.035 } 00:25:07.035 ] 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 [2024-11-25 12:59:46.745588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:07.035 [2024-11-25 12:59:46.745651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d0420 (9): Bad file descriptor 00:25:07.035 [2024-11-25 12:59:46.877966] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.035 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.035 [ 00:25:07.035 { 00:25:07.035 "name": "nvme0n1", 00:25:07.035 "aliases": [ 00:25:07.035 "b739a215-d67b-4cb5-8563-62887f372f1f" 00:25:07.035 ], 00:25:07.035 "product_name": "NVMe disk", 00:25:07.035 "block_size": 512, 00:25:07.035 "num_blocks": 2097152, 00:25:07.035 "uuid": "b739a215-d67b-4cb5-8563-62887f372f1f", 00:25:07.035 "numa_id": 0, 00:25:07.035 "assigned_rate_limits": { 00:25:07.035 "rw_ios_per_sec": 0, 00:25:07.035 "rw_mbytes_per_sec": 0, 00:25:07.035 "r_mbytes_per_sec": 0, 00:25:07.035 "w_mbytes_per_sec": 0 00:25:07.035 }, 00:25:07.035 "claimed": false, 00:25:07.035 "zoned": false, 00:25:07.035 "supported_io_types": { 00:25:07.035 "read": true, 00:25:07.035 "write": true, 00:25:07.035 "unmap": false, 00:25:07.035 "flush": true, 00:25:07.035 "reset": true, 00:25:07.036 "nvme_admin": true, 00:25:07.036 "nvme_io": true, 00:25:07.036 "nvme_io_md": false, 00:25:07.036 "write_zeroes": true, 00:25:07.036 "zcopy": false, 00:25:07.036 "get_zone_info": false, 00:25:07.036 "zone_management": false, 00:25:07.036 "zone_append": false, 00:25:07.036 "compare": true, 00:25:07.036 "compare_and_write": true, 00:25:07.036 "abort": true, 00:25:07.036 "seek_hole": false, 00:25:07.036 "seek_data": false, 00:25:07.036 "copy": true, 00:25:07.036 "nvme_iov_md": false 00:25:07.036 }, 00:25:07.036 "memory_domains": [ 00:25:07.036 { 00:25:07.036 "dma_device_id": "system", 00:25:07.036 "dma_device_type": 1 00:25:07.036 } 00:25:07.036 ], 00:25:07.036 "driver_specific": { 00:25:07.036 "nvme": [ 00:25:07.036 { 00:25:07.036 "trid": { 00:25:07.036 "trtype": "TCP", 00:25:07.036 "adrfam": "IPv4", 00:25:07.036 "traddr": "10.0.0.2", 00:25:07.036 "trsvcid": "4420", 00:25:07.036 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:07.036 }, 00:25:07.036 "ctrlr_data": { 00:25:07.036 "cntlid": 2, 00:25:07.036 "vendor_id": "0x8086", 00:25:07.036 "model_number": "SPDK bdev Controller", 00:25:07.036 "serial_number": "00000000000000000000", 00:25:07.036 "firmware_revision": "25.01", 00:25:07.036 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:07.036 "oacs": { 00:25:07.036 "security": 0, 00:25:07.036 "format": 0, 00:25:07.036 "firmware": 0, 00:25:07.036 "ns_manage": 0 00:25:07.036 }, 00:25:07.036 "multi_ctrlr": true, 00:25:07.036 "ana_reporting": false 00:25:07.036 }, 00:25:07.036 "vs": { 00:25:07.036 "nvme_version": "1.3" 00:25:07.036 }, 00:25:07.036 "ns_data": { 00:25:07.036 "id": 1, 00:25:07.036 "can_share": true 00:25:07.036 } 00:25:07.036 } 00:25:07.036 ], 00:25:07.036 "mp_policy": "active_passive" 00:25:07.036 } 00:25:07.036 } 00:25:07.036 ] 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
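[Note] Comparing the two bdev_get_bdevs dumps: the reset at @44 tears the qpair down (the "Bad file descriptor" flush error is the disconnect, not a failure), reconnects, and the bdev comes back identical except that cntlid moved from 1 to 2. One way to check just that field (the jq filter is an assumption; the field paths match the JSON above):

  ./scripts/rpc.py bdev_nvme_reset_controller nvme0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after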
00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.dV7NGoL1cG 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.dV7NGoL1cG 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.dV7NGoL1cG 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.036 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.297 [2024-11-25 12:59:46.950225] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:07.297 [2024-11-25 12:59:46.950342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.297 12:59:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.297 [2024-11-25 12:59:46.966287] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.297 nvme0n1 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.297 [ 00:25:07.297 { 00:25:07.297 "name": "nvme0n1", 00:25:07.297 "aliases": [ 00:25:07.297 "b739a215-d67b-4cb5-8563-62887f372f1f" 00:25:07.297 ], 00:25:07.297 "product_name": "NVMe disk", 00:25:07.297 "block_size": 512, 00:25:07.297 "num_blocks": 2097152, 00:25:07.297 "uuid": "b739a215-d67b-4cb5-8563-62887f372f1f", 00:25:07.297 "numa_id": 0, 00:25:07.297 "assigned_rate_limits": { 00:25:07.297 "rw_ios_per_sec": 0, 00:25:07.297 "rw_mbytes_per_sec": 0, 00:25:07.297 "r_mbytes_per_sec": 0, 00:25:07.297 "w_mbytes_per_sec": 0 00:25:07.297 }, 00:25:07.297 "claimed": false, 00:25:07.297 "zoned": false, 00:25:07.297 "supported_io_types": { 00:25:07.297 "read": true, 00:25:07.297 "write": true, 00:25:07.297 "unmap": false, 00:25:07.297 "flush": true, 00:25:07.297 "reset": true, 00:25:07.297 "nvme_admin": true, 00:25:07.297 "nvme_io": true, 00:25:07.297 "nvme_io_md": false, 00:25:07.297 "write_zeroes": true, 00:25:07.297 "zcopy": false, 00:25:07.297 "get_zone_info": false, 00:25:07.297 "zone_management": false, 00:25:07.297 "zone_append": false, 00:25:07.297 "compare": true, 00:25:07.297 "compare_and_write": true, 00:25:07.297 "abort": true, 00:25:07.297 "seek_hole": false, 00:25:07.297 "seek_data": false, 00:25:07.297 "copy": true, 00:25:07.297 "nvme_iov_md": false 00:25:07.297 }, 00:25:07.297 "memory_domains": [ 00:25:07.297 { 00:25:07.297 "dma_device_id": "system", 00:25:07.297 "dma_device_type": 1 00:25:07.297 } 00:25:07.297 ], 00:25:07.297 "driver_specific": { 00:25:07.297 "nvme": [ 00:25:07.297 { 00:25:07.297 "trid": { 00:25:07.297 "trtype": "TCP", 00:25:07.297 "adrfam": "IPv4", 00:25:07.297 "traddr": "10.0.0.2", 00:25:07.297 "trsvcid": "4421", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:07.297 }, 00:25:07.297 "ctrlr_data": { 00:25:07.297 "cntlid": 3, 00:25:07.297 "vendor_id": "0x8086", 00:25:07.297 "model_number": "SPDK bdev Controller", 00:25:07.297 "serial_number": "00000000000000000000", 00:25:07.297 "firmware_revision": "25.01", 00:25:07.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:07.297 "oacs": { 00:25:07.297 "security": 0, 00:25:07.297 "format": 0, 00:25:07.297 "firmware": 0, 00:25:07.297 "ns_manage": 0 00:25:07.297 }, 00:25:07.297 "multi_ctrlr": true, 00:25:07.297 "ana_reporting": false 00:25:07.297 }, 00:25:07.297 "vs": { 00:25:07.297 "nvme_version": "1.3" 00:25:07.297 }, 00:25:07.297 "ns_data": { 00:25:07.297 "id": 1, 00:25:07.297 "can_share": true 00:25:07.297 } 00:25:07.297 } 00:25:07.297 ], 00:25:07.297 "mp_policy": "active_passive" 00:25:07.297 } 00:25:07.297 } 00:25:07.297 ] 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.dV7NGoL1cG 00:25:07.297 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
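[Note] Steps @53 through @66 exercise the (experimental, per the notices above) TLS path: write a throwaway interchange-format PSK to a 0600 file, register it in the keyring, close the subsystem to unknown hosts, open a --secure-channel listener on 4421, whitelist host1 with the key, and reconnect with -q/--psk. The cntlid 3 in the dump above confirms this is a third, TLS-protected controller. The same flow as explicit rpc.py calls (script path and key file name are assumptions; flags and the test key are from the trace):

  KEY=/tmp/psk.txt
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0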
00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.298 rmmod nvme_tcp 00:25:07.298 rmmod nvme_fabrics 00:25:07.298 rmmod nvme_keyring 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 728996 ']' 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 728996 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 728996 ']' 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 728996 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.298 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 728996 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 728996' 00:25:07.558 killing process with pid 728996 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 728996 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 728996 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.558 
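[Note] The nvmftestfini trace that follows is the EXIT-trap teardown: unload the NVMe modules pulled in for the test (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines are the dependency cascade of modprobe -r), kill the target, strip only the SPDK-tagged iptables rules, and drop the namespace. Roughly, as standalone commands (a sketch; the kill and the explicit netns delete are assumptions about what killprocess and _remove_spdk_ns amount to):

  kill "$nvmfpid" && wait "$nvmfpid"          # killprocess 728996
  modprobe -v -r nvme-tcp                     # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  # the '-m comment SPDK_NVMF:...' tag added by ipts makes this filter safe
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1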
12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.558 12:59:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.104 00:25:10.104 real 0m12.612s 00:25:10.104 user 0m4.419s 00:25:10.104 sys 0m6.732s 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.104 ************************************ 00:25:10.104 END TEST nvmf_async_init 00:25:10.104 ************************************ 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.104 ************************************ 00:25:10.104 START TEST dma 00:25:10.104 ************************************ 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:10.104 * Looking for test storage... 00:25:10.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.104 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.104 --rc genhtml_branch_coverage=1 00:25:10.104 --rc genhtml_function_coverage=1 00:25:10.104 --rc genhtml_legend=1 00:25:10.104 --rc geninfo_all_blocks=1 00:25:10.105 --rc geninfo_unexecuted_blocks=1 00:25:10.105 00:25:10.105 ' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.105 --rc genhtml_branch_coverage=1 00:25:10.105 --rc genhtml_function_coverage=1 00:25:10.105 --rc genhtml_legend=1 00:25:10.105 --rc geninfo_all_blocks=1 00:25:10.105 --rc geninfo_unexecuted_blocks=1 00:25:10.105 00:25:10.105 ' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.105 --rc genhtml_branch_coverage=1 00:25:10.105 --rc genhtml_function_coverage=1 00:25:10.105 --rc genhtml_legend=1 00:25:10.105 --rc geninfo_all_blocks=1 00:25:10.105 --rc geninfo_unexecuted_blocks=1 00:25:10.105 00:25:10.105 ' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.105 --rc genhtml_branch_coverage=1 00:25:10.105 --rc genhtml_function_coverage=1 00:25:10.105 --rc genhtml_legend=1 00:25:10.105 --rc geninfo_all_blocks=1 00:25:10.105 --rc geninfo_unexecuted_blocks=1 00:25:10.105 00:25:10.105 ' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.105 
12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:10.105 00:25:10.105 real 0m0.235s 00:25:10.105 user 0m0.150s 00:25:10.105 sys 0m0.101s 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:10.105 ************************************ 00:25:10.105 END TEST dma 00:25:10.105 ************************************ 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.105 ************************************ 00:25:10.105 START TEST nvmf_identify 00:25:10.105 
************************************ 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:10.105 * Looking for test storage... 00:25:10.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.105 12:59:49 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:10.105 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.105 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.105 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.105 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:10.105 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.105 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.106 --rc genhtml_branch_coverage=1 00:25:10.106 --rc genhtml_function_coverage=1 00:25:10.106 --rc genhtml_legend=1 00:25:10.106 --rc geninfo_all_blocks=1 00:25:10.106 --rc geninfo_unexecuted_blocks=1 00:25:10.106 00:25:10.106 ' 00:25:10.106 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.106 --rc genhtml_branch_coverage=1 00:25:10.106 --rc genhtml_function_coverage=1 00:25:10.106 --rc genhtml_legend=1 00:25:10.106 --rc geninfo_all_blocks=1 00:25:10.106 --rc geninfo_unexecuted_blocks=1 00:25:10.106 00:25:10.106 ' 00:25:10.106 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.106 --rc genhtml_branch_coverage=1 00:25:10.106 --rc genhtml_function_coverage=1 00:25:10.106 --rc genhtml_legend=1 00:25:10.106 --rc geninfo_all_blocks=1 00:25:10.106 --rc geninfo_unexecuted_blocks=1 00:25:10.106 00:25:10.106 ' 00:25:10.106 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.106 --rc genhtml_branch_coverage=1 00:25:10.106 --rc genhtml_function_coverage=1 00:25:10.106 --rc genhtml_legend=1 00:25:10.106 --rc geninfo_all_blocks=1 00:25:10.106 --rc geninfo_unexecuted_blocks=1 00:25:10.106 00:25:10.106 ' 00:25:10.106 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.368 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.369 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.369 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.369 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.369 12:59:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.511 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:18.512 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:18.512 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
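[Note] The discovery loop above builds a whitelist of known Intel E810/x722 and Mellanox device IDs, then resolves each matching PCI address to its kernel net device through sysfs; that is all "Found net devices under 0000:31:00.0: cvl_0_0" amounts to. The lookup in one shot (the PCI address is the first E810 port found in this run):

  pci=0000:31:00.0
  ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0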
00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:18.512 Found net devices under 0000:31:00.0: cvl_0_0 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:18.512 Found net devices under 0000:31:00.1: cvl_0_1 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:18.512 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:18.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:25:18.773 00:25:18.773 --- 10.0.0.2 ping statistics --- 00:25:18.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.773 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:25:18.773 00:25:18.773 --- 10.0.0.1 ping statistics --- 00:25:18.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.773 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.773 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=734109 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 734109 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 734109 ']' 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.774 12:59:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:18.774 [2024-11-25 12:59:58.577464] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
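At this point common.sh has split the two E810 ports into a target/initiator pair: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened in iptables, and reachability is ping-verified in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, using the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
  ping -c 1 10.0.0.2                            # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The rpc_cmd calls that follow are the autotest wrapper around scripts/rpc.py, so the same target can be provisioned manually with the arguments copied from the trace (paths relative to the SPDK tree):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420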
00:25:18.774 [2024-11-25 12:59:58.577527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.774 [2024-11-25 12:59:58.672343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.034 [2024-11-25 12:59:58.715056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.034 [2024-11-25 12:59:58.715094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.034 [2024-11-25 12:59:58.715102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.034 [2024-11-25 12:59:58.715108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.034 [2024-11-25 12:59:58.715114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.034 [2024-11-25 12:59:58.716896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.034 [2024-11-25 12:59:58.717012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.034 [2024-11-25 12:59:58.717168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.034 [2024-11-25 12:59:58.717168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.605 [2024-11-25 12:59:59.391860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.605 Malloc0 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.605 [2024-11-25 12:59:59.500102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.605 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.869 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.869 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:19.869 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.869 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:19.869 [ 00:25:19.869 { 00:25:19.869 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:19.869 "subtype": "Discovery", 00:25:19.869 "listen_addresses": [ 00:25:19.869 { 00:25:19.869 "trtype": "TCP", 00:25:19.869 "adrfam": "IPv4", 00:25:19.869 "traddr": "10.0.0.2", 00:25:19.869 "trsvcid": "4420" 00:25:19.869 } 00:25:19.869 ], 00:25:19.869 "allow_any_host": true, 00:25:19.869 "hosts": [] 00:25:19.869 }, 00:25:19.869 { 00:25:19.869 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.869 "subtype": "NVMe", 00:25:19.869 "listen_addresses": [ 00:25:19.869 { 00:25:19.869 "trtype": "TCP", 00:25:19.869 "adrfam": "IPv4", 00:25:19.869 "traddr": "10.0.0.2", 00:25:19.869 "trsvcid": "4420" 00:25:19.869 } 00:25:19.869 ], 00:25:19.869 "allow_any_host": true, 00:25:19.869 "hosts": [], 00:25:19.869 "serial_number": "SPDK00000000000001", 00:25:19.869 "model_number": "SPDK bdev Controller", 00:25:19.869 "max_namespaces": 32, 00:25:19.869 "min_cntlid": 1, 00:25:19.869 "max_cntlid": 65519, 00:25:19.869 "namespaces": [ 00:25:19.869 { 00:25:19.869 "nsid": 1, 00:25:19.869 "bdev_name": "Malloc0", 00:25:19.869 "name": "Malloc0", 00:25:19.869 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:19.869 "eui64": "ABCDEF0123456789", 00:25:19.869 "uuid": "0ed13057-b3a3-45d4-be84-ab3fed2253da" 00:25:19.869 } 00:25:19.869 ] 00:25:19.869 } 00:25:19.869 ] 00:25:19.869 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.869 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:19.869 [2024-11-25 12:59:59.564128] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:25:19.869 [2024-11-25 12:59:59.564178] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734439 ] 00:25:19.869 [2024-11-25 12:59:59.620059] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:19.869 [2024-11-25 12:59:59.620112] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:19.869 [2024-11-25 12:59:59.620118] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:19.869 [2024-11-25 12:59:59.620129] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:19.869 [2024-11-25 12:59:59.620140] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:19.869 [2024-11-25 12:59:59.620827] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:19.869 [2024-11-25 12:59:59.620859] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13e8550 0 00:25:19.869 [2024-11-25 12:59:59.626882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:19.869 [2024-11-25 12:59:59.626895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:19.869 [2024-11-25 12:59:59.626900] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:19.869 [2024-11-25 12:59:59.626907] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:19.869 [2024-11-25 12:59:59.626941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.626946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.626950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.869 [2024-11-25 12:59:59.626963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:19.869 [2024-11-25 12:59:59.626981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.869 [2024-11-25 12:59:59.634873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.869 [2024-11-25 12:59:59.634882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.869 [2024-11-25 12:59:59.634886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.634891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.869 [2024-11-25 12:59:59.634903] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:19.869 [2024-11-25 12:59:59.634909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:19.869 [2024-11-25 12:59:59.634915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:19.869 [2024-11-25 12:59:59.634927] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.634932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.634935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.869 [2024-11-25 12:59:59.634943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.869 [2024-11-25 12:59:59.634956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.869 [2024-11-25 12:59:59.635138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.869 [2024-11-25 12:59:59.635145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.869 [2024-11-25 12:59:59.635148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.635152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.869 [2024-11-25 12:59:59.635157] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:19.869 [2024-11-25 12:59:59.635165] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:19.869 [2024-11-25 12:59:59.635171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.635175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.635179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.869 [2024-11-25 12:59:59.635186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.869 [2024-11-25 12:59:59.635196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.869 [2024-11-25 12:59:59.635365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.869 [2024-11-25 12:59:59.635371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.869 [2024-11-25 12:59:59.635374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.635378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.869 [2024-11-25 12:59:59.635383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:19.869 [2024-11-25 12:59:59.635391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:19.869 [2024-11-25 12:59:59.635401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.869 [2024-11-25 12:59:59.635404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.635408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.635415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.870 [2024-11-25 12:59:59.635425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 
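The DEBUG lines above show the SPDK host driver walking its fabrics bring-up state machine against the discovery controller: FABRIC CONNECT on the admin queue (cid 0), PROPERTY GET of VS and CAP, then the CC/CSTS enable handshake (check CC.EN, disable and wait for CSTS.RDY = 0 if needed, write CC.EN = 1, poll until CSTS.RDY = 1, with the 15000 ms timeouts visible in the trace), and only then IDENTIFY. Since nvme-tcp was modprobed earlier in this run, the same discovery service can be cross-checked from the kernel initiator; a minimal sketch, assuming nvme-cli is installed:

  # Kernel-side equivalent of the spdk_nvme_identify run against the discovery
  # subsystem; should report the same two records that appear as
  # Discovery Log Entry 0 and 1 further down in this log
  nvme discover -t tcp -a 10.0.0.2 -s 4420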
00:25:19.870 [2024-11-25 12:59:59.635596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.870 [2024-11-25 12:59:59.635603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.870 [2024-11-25 12:59:59.635606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.635610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.870 [2024-11-25 12:59:59.635615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:19.870 [2024-11-25 12:59:59.635625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.635629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.635632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.635639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.870 [2024-11-25 12:59:59.635649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.870 [2024-11-25 12:59:59.635811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.870 [2024-11-25 12:59:59.635817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.870 [2024-11-25 12:59:59.635821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.635825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.870 [2024-11-25 12:59:59.635829] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:19.870 [2024-11-25 12:59:59.635834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:19.870 [2024-11-25 12:59:59.635842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:19.870 [2024-11-25 12:59:59.635950] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:19.870 [2024-11-25 12:59:59.635955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:19.870 [2024-11-25 12:59:59.635964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.635968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.635971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.635978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.870 [2024-11-25 12:59:59.635989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.870 [2024-11-25 12:59:59.636200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.870 [2024-11-25 12:59:59.636207] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.870 [2024-11-25 12:59:59.636210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.636214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.870 [2024-11-25 12:59:59.636223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:19.870 [2024-11-25 12:59:59.636232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.636236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.636240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.636247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.870 [2024-11-25 12:59:59.636256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.870 [2024-11-25 12:59:59.636436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.870 [2024-11-25 12:59:59.636442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.870 [2024-11-25 12:59:59.636446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.636450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.870 [2024-11-25 12:59:59.636454] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:19.870 [2024-11-25 12:59:59.636459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:19.870 [2024-11-25 12:59:59.636466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:19.870 [2024-11-25 12:59:59.636474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:19.870 [2024-11-25 12:59:59.636482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.636486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.636493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.870 [2024-11-25 12:59:59.636503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.870 [2024-11-25 12:59:59.636698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.870 [2024-11-25 12:59:59.636705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.870 [2024-11-25 12:59:59.636708] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.636712] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e8550): datao=0, datal=4096, cccid=0 00:25:19.870 [2024-11-25 12:59:59.636717] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x144a100) on tqpair(0x13e8550): expected_datao=0, payload_size=4096 00:25:19.870 [2024-11-25 12:59:59.636722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.636737] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.636742] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.870 [2024-11-25 12:59:59.677057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.870 [2024-11-25 12:59:59.677061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.870 [2024-11-25 12:59:59.677072] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:19.870 [2024-11-25 12:59:59.677077] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:19.870 [2024-11-25 12:59:59.677085] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:19.870 [2024-11-25 12:59:59.677093] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:19.870 [2024-11-25 12:59:59.677097] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:19.870 [2024-11-25 12:59:59.677102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:19.870 [2024-11-25 12:59:59.677113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:19.870 [2024-11-25 12:59:59.677120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.677135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:19.870 [2024-11-25 12:59:59.677147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.870 [2024-11-25 12:59:59.677380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.870 [2024-11-25 12:59:59.677386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.870 [2024-11-25 12:59:59.677389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.870 [2024-11-25 12:59:59.677400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13e8550) 00:25:19.870 
[2024-11-25 12:59:59.677414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.870 [2024-11-25 12:59:59.677420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.677433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.870 [2024-11-25 12:59:59.677439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.677452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.870 [2024-11-25 12:59:59.677458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.870 [2024-11-25 12:59:59.677466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:19.870 [2024-11-25 12:59:59.677471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.870 [2024-11-25 12:59:59.677476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:19.870 [2024-11-25 12:59:59.677484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:19.870 [2024-11-25 12:59:59.677491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.677496] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e8550) 00:25:19.871 [2024-11-25 12:59:59.677503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.871 [2024-11-25 12:59:59.677515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a100, cid 0, qid 0 00:25:19.871 [2024-11-25 12:59:59.677520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a280, cid 1, qid 0 00:25:19.871 [2024-11-25 12:59:59.677525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a400, cid 2, qid 0 00:25:19.871 [2024-11-25 12:59:59.677529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:19.871 [2024-11-25 12:59:59.677534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a700, cid 4, qid 0 00:25:19.871 [2024-11-25 12:59:59.677678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.871 [2024-11-25 12:59:59.677685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.871 [2024-11-25 12:59:59.677688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:25:19.871 [2024-11-25 12:59:59.677692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a700) on tqpair=0x13e8550 00:25:19.871 [2024-11-25 12:59:59.677699] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:19.871 [2024-11-25 12:59:59.677705] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:19.871 [2024-11-25 12:59:59.677715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.677719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e8550) 00:25:19.871 [2024-11-25 12:59:59.677725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.871 [2024-11-25 12:59:59.677736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a700, cid 4, qid 0 00:25:19.871 [2024-11-25 12:59:59.677840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.871 [2024-11-25 12:59:59.677846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.871 [2024-11-25 12:59:59.677849] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.677853] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e8550): datao=0, datal=4096, cccid=4 00:25:19.871 [2024-11-25 12:59:59.677857] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x144a700) on tqpair(0x13e8550): expected_datao=0, payload_size=4096 00:25:19.871 [2024-11-25 12:59:59.677866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.677883] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.677887] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.871 [2024-11-25 12:59:59.678083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.871 [2024-11-25 12:59:59.678086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a700) on tqpair=0x13e8550 00:25:19.871 [2024-11-25 12:59:59.678101] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:19.871 [2024-11-25 12:59:59.678123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e8550) 00:25:19.871 [2024-11-25 12:59:59.678134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.871 [2024-11-25 12:59:59.678141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13e8550) 00:25:19.871 [2024-11-25 12:59:59.678157] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.871 [2024-11-25 12:59:59.678170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a700, cid 4, qid 0 00:25:19.871 [2024-11-25 12:59:59.678176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a880, cid 5, qid 0 00:25:19.871 [2024-11-25 12:59:59.678424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.871 [2024-11-25 12:59:59.678430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.871 [2024-11-25 12:59:59.678434] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678438] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e8550): datao=0, datal=1024, cccid=4 00:25:19.871 [2024-11-25 12:59:59.678442] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x144a700) on tqpair(0x13e8550): expected_datao=0, payload_size=1024 00:25:19.871 [2024-11-25 12:59:59.678446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678453] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678456] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.871 [2024-11-25 12:59:59.678468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.871 [2024-11-25 12:59:59.678471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.678475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a880) on tqpair=0x13e8550 00:25:19.871 [2024-11-25 12:59:59.725868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.871 [2024-11-25 12:59:59.725880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.871 [2024-11-25 12:59:59.725884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.725888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a700) on tqpair=0x13e8550 00:25:19.871 [2024-11-25 12:59:59.725901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.725905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e8550) 00:25:19.871 [2024-11-25 12:59:59.725913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.871 [2024-11-25 12:59:59.725930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a700, cid 4, qid 0 00:25:19.871 [2024-11-25 12:59:59.726230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.871 [2024-11-25 12:59:59.726237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.871 [2024-11-25 12:59:59.726241] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726245] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e8550): datao=0, datal=3072, cccid=4 00:25:19.871 [2024-11-25 12:59:59.726249] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x144a700) on tqpair(0x13e8550): expected_datao=0, payload_size=3072 00:25:19.871 [2024-11-25 12:59:59.726254] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726261] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726264] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.871 [2024-11-25 12:59:59.726415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.871 [2024-11-25 12:59:59.726418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a700) on tqpair=0x13e8550 00:25:19.871 [2024-11-25 12:59:59.726434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13e8550) 00:25:19.871 [2024-11-25 12:59:59.726444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.871 [2024-11-25 12:59:59.726458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a700, cid 4, qid 0 00:25:19.871 [2024-11-25 12:59:59.726669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:19.871 [2024-11-25 12:59:59.726675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:19.871 [2024-11-25 12:59:59.726678] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726682] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13e8550): datao=0, datal=8, cccid=4 00:25:19.871 [2024-11-25 12:59:59.726686] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x144a700) on tqpair(0x13e8550): expected_datao=0, payload_size=8 00:25:19.871 [2024-11-25 12:59:59.726691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726697] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.726701] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.768006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.871 [2024-11-25 12:59:59.768015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.871 [2024-11-25 12:59:59.768018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.871 [2024-11-25 12:59:59.768022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a700) on tqpair=0x13e8550 00:25:19.871 ===================================================== 00:25:19.871 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:19.871 ===================================================== 00:25:19.871 Controller Capabilities/Features 00:25:19.871 ================================ 00:25:19.871 Vendor ID: 0000 00:25:19.871 Subsystem Vendor ID: 0000 00:25:19.871 Serial Number: .................... 00:25:19.871 Model Number: ........................................ 
00:25:19.871 Firmware Version: 25.01 00:25:19.871 Recommended Arb Burst: 0 00:25:19.871 IEEE OUI Identifier: 00 00 00 00:25:19.871 Multi-path I/O 00:25:19.871 May have multiple subsystem ports: No 00:25:19.871 May have multiple controllers: No 00:25:19.871 Associated with SR-IOV VF: No 00:25:19.871 Max Data Transfer Size: 131072 00:25:19.871 Max Number of Namespaces: 0 00:25:19.871 Max Number of I/O Queues: 1024 00:25:19.871 NVMe Specification Version (VS): 1.3 00:25:19.871 NVMe Specification Version (Identify): 1.3 00:25:19.871 Maximum Queue Entries: 128 00:25:19.871 Contiguous Queues Required: Yes 00:25:19.871 Arbitration Mechanisms Supported 00:25:19.871 Weighted Round Robin: Not Supported 00:25:19.871 Vendor Specific: Not Supported 00:25:19.871 Reset Timeout: 15000 ms 00:25:19.871 Doorbell Stride: 4 bytes 00:25:19.871 NVM Subsystem Reset: Not Supported 00:25:19.871 Command Sets Supported 00:25:19.871 NVM Command Set: Supported 00:25:19.871 Boot Partition: Not Supported 00:25:19.871 Memory Page Size Minimum: 4096 bytes 00:25:19.872 Memory Page Size Maximum: 4096 bytes 00:25:19.872 Persistent Memory Region: Not Supported 00:25:19.872 Optional Asynchronous Events Supported 00:25:19.872 Namespace Attribute Notices: Not Supported 00:25:19.872 Firmware Activation Notices: Not Supported 00:25:19.872 ANA Change Notices: Not Supported 00:25:19.872 PLE Aggregate Log Change Notices: Not Supported 00:25:19.872 LBA Status Info Alert Notices: Not Supported 00:25:19.872 EGE Aggregate Log Change Notices: Not Supported 00:25:19.872 Normal NVM Subsystem Shutdown event: Not Supported 00:25:19.872 Zone Descriptor Change Notices: Not Supported 00:25:19.872 Discovery Log Change Notices: Supported 00:25:19.872 Controller Attributes 00:25:19.872 128-bit Host Identifier: Not Supported 00:25:19.872 Non-Operational Permissive Mode: Not Supported 00:25:19.872 NVM Sets: Not Supported 00:25:19.872 Read Recovery Levels: Not Supported 00:25:19.872 Endurance Groups: Not Supported 00:25:19.872 Predictable Latency Mode: Not Supported 00:25:19.872 Traffic Based Keep Alive: Not Supported 00:25:19.872 Namespace Granularity: Not Supported 00:25:19.872 SQ Associations: Not Supported 00:25:19.872 UUID List: Not Supported 00:25:19.872 Multi-Domain Subsystem: Not Supported 00:25:19.872 Fixed Capacity Management: Not Supported 00:25:19.872 Variable Capacity Management: Not Supported 00:25:19.872 Delete Endurance Group: Not Supported 00:25:19.872 Delete NVM Set: Not Supported 00:25:19.872 Extended LBA Formats Supported: Not Supported 00:25:19.872 Flexible Data Placement Supported: Not Supported 00:25:19.872 00:25:19.872 Controller Memory Buffer Support 00:25:19.872 ================================ 00:25:19.872 Supported: No 00:25:19.872 00:25:19.872 Persistent Memory Region Support 00:25:19.872 ================================ 00:25:19.872 Supported: No 00:25:19.872 00:25:19.872 Admin Command Set Attributes 00:25:19.872 ============================ 00:25:19.872 Security Send/Receive: Not Supported 00:25:19.872 Format NVM: Not Supported 00:25:19.872 Firmware Activate/Download: Not Supported 00:25:19.872 Namespace Management: Not Supported 00:25:19.872 Device Self-Test: Not Supported 00:25:19.872 Directives: Not Supported 00:25:19.872 NVMe-MI: Not Supported 00:25:19.872 Virtualization Management: Not Supported 00:25:19.872 Doorbell Buffer Config: Not Supported 00:25:19.872 Get LBA Status Capability: Not Supported 00:25:19.872 Command & Feature Lockdown Capability: Not Supported 00:25:19.872 Abort Command Limit: 1 00:25:19.872 Async
Event Request Limit: 4 00:25:19.872 Number of Firmware Slots: N/A 00:25:19.872 Firmware Slot 1 Read-Only: N/A 00:25:19.872 Firmware Activation Without Reset: N/A 00:25:19.872 Multiple Update Detection Support: N/A 00:25:19.872 Firmware Update Granularity: No Information Provided 00:25:19.872 Per-Namespace SMART Log: No 00:25:19.872 Asymmetric Namespace Access Log Page: Not Supported 00:25:19.872 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:19.872 Command Effects Log Page: Not Supported 00:25:19.872 Get Log Page Extended Data: Supported 00:25:19.872 Telemetry Log Pages: Not Supported 00:25:19.872 Persistent Event Log Pages: Not Supported 00:25:19.872 Supported Log Pages Log Page: May Support 00:25:19.872 Commands Supported & Effects Log Page: Not Supported 00:25:19.872 Feature Identifiers & Effects Log Page: May Support 00:25:19.872 NVMe-MI Commands & Effects Log Page: May Support 00:25:19.872 Data Area 4 for Telemetry Log: Not Supported 00:25:19.872 Error Log Page Entries Supported: 128 00:25:19.872 Keep Alive: Not Supported 00:25:19.872 00:25:19.872 NVM Command Set Attributes 00:25:19.872 ========================== 00:25:19.872 Submission Queue Entry Size 00:25:19.872 Max: 1 00:25:19.872 Min: 1 00:25:19.872 Completion Queue Entry Size 00:25:19.872 Max: 1 00:25:19.872 Min: 1 00:25:19.872 Number of Namespaces: 0 00:25:19.872 Compare Command: Not Supported 00:25:19.872 Write Uncorrectable Command: Not Supported 00:25:19.872 Dataset Management Command: Not Supported 00:25:19.872 Write Zeroes Command: Not Supported 00:25:19.872 Set Features Save Field: Not Supported 00:25:19.872 Reservations: Not Supported 00:25:19.872 Timestamp: Not Supported 00:25:19.872 Copy: Not Supported 00:25:19.872 Volatile Write Cache: Not Present 00:25:19.872 Atomic Write Unit (Normal): 1 00:25:19.872 Atomic Write Unit (PFail): 1 00:25:19.872 Atomic Compare & Write Unit: 1 00:25:19.872 Fused Compare & Write: Supported 00:25:19.872 Scatter-Gather List 00:25:19.872 SGL Command Set: Supported 00:25:19.872 SGL Keyed: Supported 00:25:19.872 SGL Bit Bucket Descriptor: Not Supported 00:25:19.872 SGL Metadata Pointer: Not Supported 00:25:19.872 Oversized SGL: Not Supported 00:25:19.872 SGL Metadata Address: Not Supported 00:25:19.872 SGL Offset: Supported 00:25:19.872 Transport SGL Data Block: Not Supported 00:25:19.872 Replay Protected Memory Block: Not Supported 00:25:19.872 00:25:19.872 Firmware Slot Information 00:25:19.872 ========================= 00:25:19.872 Active slot: 0 00:25:19.872 00:25:19.872 00:25:19.872 Error Log 00:25:19.872 ========= 00:25:19.872 00:25:19.872 Active Namespaces 00:25:19.872 ================= 00:25:19.872 Discovery Log Page 00:25:19.872 ================== 00:25:19.872 Generation Counter: 2 00:25:19.872 Number of Records: 2 00:25:19.872 Record Format: 0 00:25:19.872 00:25:19.872 Discovery Log Entry 0 00:25:19.872 ---------------------- 00:25:19.872 Transport Type: 3 (TCP) 00:25:19.872 Address Family: 1 (IPv4) 00:25:19.872 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:19.872 Entry Flags: 00:25:19.872 Duplicate Returned Information: 1 00:25:19.872 Explicit Persistent Connection Support for Discovery: 1 00:25:19.872 Transport Requirements: 00:25:19.872 Secure Channel: Not Required 00:25:19.872 Port ID: 0 (0x0000) 00:25:19.872 Controller ID: 65535 (0xffff) 00:25:19.872 Admin Max SQ Size: 128 00:25:19.872 Transport Service Identifier: 4420 00:25:19.872 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:19.872 Transport Address: 10.0.0.2 00:25:19.872 
Discovery Log Entry 1 00:25:19.872 ---------------------- 00:25:19.872 Transport Type: 3 (TCP) 00:25:19.872 Address Family: 1 (IPv4) 00:25:19.872 Subsystem Type: 2 (NVM Subsystem) 00:25:19.872 Entry Flags: 00:25:19.872 Duplicate Returned Information: 0 00:25:19.872 Explicit Persistent Connection Support for Discovery: 0 00:25:19.872 Transport Requirements: 00:25:19.872 Secure Channel: Not Required 00:25:19.872 Port ID: 0 (0x0000) 00:25:19.872 Controller ID: 65535 (0xffff) 00:25:19.872 Admin Max SQ Size: 128 00:25:19.872 Transport Service Identifier: 4420 00:25:19.872 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:19.872 Transport Address: 10.0.0.2 [2024-11-25 12:59:59.768107] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:19.872 [2024-11-25 12:59:59.768117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a100) on tqpair=0x13e8550 00:25:19.872 [2024-11-25 12:59:59.768124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.872 [2024-11-25 12:59:59.768129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a280) on tqpair=0x13e8550 00:25:19.872 [2024-11-25 12:59:59.768134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.872 [2024-11-25 12:59:59.768139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a400) on tqpair=0x13e8550 00:25:19.872 [2024-11-25 12:59:59.768143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.872 [2024-11-25 12:59:59.768148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:19.872 [2024-11-25 12:59:59.768153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.872 [2024-11-25 12:59:59.768163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.872 [2024-11-25 12:59:59.768167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.872 [2024-11-25 12:59:59.768170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:19.873 [2024-11-25 12:59:59.768178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.873 [2024-11-25 12:59:59.768191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:19.873 [2024-11-25 12:59:59.768289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:19.873 [2024-11-25 12:59:59.768295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:19.873 [2024-11-25 12:59:59.768299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:19.873 [2024-11-25 12:59:59.768303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:19.873 [2024-11-25 12:59:59.768311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:19.873 [2024-11-25 12:59:59.768315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:19.873 [2024-11-25 12:59:59.768319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:19.873 [2024-11-25 
12:59:59.768325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.873 [2024-11-25 12:59:59.768338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:20.139 [2024-11-25 12:59:59.768536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.139 [2024-11-25 12:59:59.768544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.139 [2024-11-25 12:59:59.768548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.768554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:20.139 [2024-11-25 12:59:59.768559] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:20.139 [2024-11-25 12:59:59.768564] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:20.139 [2024-11-25 12:59:59.768573] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.768577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.768581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:20.139 [2024-11-25 12:59:59.768588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.139 [2024-11-25 12:59:59.768598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:20.139 [2024-11-25 12:59:59.768760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.139 [2024-11-25 12:59:59.768767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.139 [2024-11-25 12:59:59.768770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.768774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:20.139 [2024-11-25 12:59:59.768784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.768788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.768791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:20.139 [2024-11-25 12:59:59.768798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.139 [2024-11-25 12:59:59.768808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:20.139 [2024-11-25 12:59:59.769031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.139 [2024-11-25 12:59:59.769038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.139 [2024-11-25 12:59:59.769041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:20.139 [2024-11-25 12:59:59.769054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769062] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:20.139 [2024-11-25 12:59:59.769068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.139 [2024-11-25 12:59:59.769079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:20.139 [2024-11-25 12:59:59.769259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.139 [2024-11-25 12:59:59.769265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.139 [2024-11-25 12:59:59.769271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:20.139 [2024-11-25 12:59:59.769284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:20.139 [2024-11-25 12:59:59.769298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.139 [2024-11-25 12:59:59.769309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:20.139 [2024-11-25 12:59:59.769490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.139 [2024-11-25 12:59:59.769496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.139 [2024-11-25 12:59:59.769499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:20.139 [2024-11-25 12:59:59.769513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:20.139 [2024-11-25 12:59:59.769527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.139 [2024-11-25 12:59:59.769537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:20.139 [2024-11-25 12:59:59.769714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.139 [2024-11-25 12:59:59.769720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.139 [2024-11-25 12:59:59.769724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:20.139 [2024-11-25 12:59:59.769737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.769745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13e8550) 00:25:20.139 [2024-11-25 12:59:59.769751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.139 [2024-11-25 12:59:59.769761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x144a580, cid 3, qid 0 00:25:20.139 [2024-11-25 12:59:59.773878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.139 [2024-11-25 12:59:59.773886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.139 [2024-11-25 12:59:59.773889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.773893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x144a580) on tqpair=0x13e8550 00:25:20.139 [2024-11-25 12:59:59.773901] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:25:20.139 00:25:20.139 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:20.139 [2024-11-25 12:59:59.819020] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:25:20.139 [2024-11-25 12:59:59.819088] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734441 ] 00:25:20.139 [2024-11-25 12:59:59.871170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:20.139 [2024-11-25 12:59:59.871216] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:20.139 [2024-11-25 12:59:59.871221] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:20.139 [2024-11-25 12:59:59.871234] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:20.139 [2024-11-25 12:59:59.871245] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:20.139 [2024-11-25 12:59:59.875073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:20.139 [2024-11-25 12:59:59.875100] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x72e550 0 00:25:20.139 [2024-11-25 12:59:59.882872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:20.139 [2024-11-25 12:59:59.882884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:20.139 [2024-11-25 12:59:59.882888] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:20.139 [2024-11-25 12:59:59.882892] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:20.139 [2024-11-25 12:59:59.882920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.882926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.882930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.139 [2024-11-25 12:59:59.882942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:20.139 [2024-11-25 12:59:59.882960] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.139 [2024-11-25 12:59:59.890871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.139 [2024-11-25 12:59:59.890880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.139 [2024-11-25 12:59:59.890884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.139 [2024-11-25 12:59:59.890889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.139 [2024-11-25 12:59:59.890900] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:20.139 [2024-11-25 12:59:59.890907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:20.140 [2024-11-25 12:59:59.890912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:20.140 [2024-11-25 12:59:59.890924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.890929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.890932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.140 [2024-11-25 12:59:59.890940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.140 [2024-11-25 12:59:59.890954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.140 [2024-11-25 12:59:59.891157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.140 [2024-11-25 12:59:59.891164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.140 [2024-11-25 12:59:59.891168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.140 [2024-11-25 12:59:59.891177] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:20.140 [2024-11-25 12:59:59.891185] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:20.140 [2024-11-25 12:59:59.891195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.140 [2024-11-25 12:59:59.891210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.140 [2024-11-25 12:59:59.891222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.140 [2024-11-25 12:59:59.891414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.140 [2024-11-25 12:59:59.891421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.140 [2024-11-25 12:59:59.891424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 
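
Everything from nvme_tcp_qpair_connect_sock through the FABRIC CONNECT completion above (CNTLID 0x0001) is driven by one public call, and the -r argument handed to spdk_nvme_identify is the standard transport-ID string that spdk_nvme_transport_id_parse() consumes. A minimal host-side sketch of the same connection, with error paths abbreviated:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int connect_cnode1(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "connect_sketch";  /* illustrative app name */
        if (spdk_env_init(&env_opts) < 0) {
            return -1;
        }

        /* Same string format as the -r argument in the log above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return -1;
        }

        /* Performs the icreq/icresp exchange and FABRIC CONNECT seen
         * in the trace, then runs the controller-init state machine
         * (read vs, read cap, ...) to completion before returning. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to cnode1 failed\n");
            return -1;
        }
        return 0;
    }
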
00:25:20.140 [2024-11-25 12:59:59.891434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:20.140 [2024-11-25 12:59:59.891442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:20.140 [2024-11-25 12:59:59.891449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.140 [2024-11-25 12:59:59.891463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.140 [2024-11-25 12:59:59.891474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.140 [2024-11-25 12:59:59.891527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.140 [2024-11-25 12:59:59.891533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.140 [2024-11-25 12:59:59.891537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.140 [2024-11-25 12:59:59.891546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:20.140 [2024-11-25 12:59:59.891555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.140 [2024-11-25 12:59:59.891569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.140 [2024-11-25 12:59:59.891580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.140 [2024-11-25 12:59:59.891632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.140 [2024-11-25 12:59:59.891639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.140 [2024-11-25 12:59:59.891642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.140 [2024-11-25 12:59:59.891651] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:20.140 [2024-11-25 12:59:59.891656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:20.140 [2024-11-25 12:59:59.891664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:20.140 [2024-11-25 12:59:59.891774] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:20.140 [2024-11-25 12:59:59.891779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:20.140 [2024-11-25 12:59:59.891787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.140 [2024-11-25 12:59:59.891801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.140 [2024-11-25 12:59:59.891813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.140 [2024-11-25 12:59:59.891870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.140 [2024-11-25 12:59:59.891877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.140 [2024-11-25 12:59:59.891881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.140 [2024-11-25 12:59:59.891890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:20.140 [2024-11-25 12:59:59.891899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.891907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.140 [2024-11-25 12:59:59.891914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.140 [2024-11-25 12:59:59.891924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.140 [2024-11-25 12:59:59.892000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.140 [2024-11-25 12:59:59.892006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.140 [2024-11-25 12:59:59.892010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.892014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.140 [2024-11-25 12:59:59.892018] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:20.140 [2024-11-25 12:59:59.892023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:20.140 [2024-11-25 12:59:59.892031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:20.140 [2024-11-25 12:59:59.892039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:20.140 [2024-11-25 12:59:59.892048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.892052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.140 [2024-11-25 12:59:59.892059] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.140 [2024-11-25 12:59:59.892070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.140 [2024-11-25 12:59:59.892284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:20.140 [2024-11-25 12:59:59.892292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:20.140 [2024-11-25 12:59:59.892296] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.892299] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x72e550): datao=0, datal=4096, cccid=0 00:25:20.140 [2024-11-25 12:59:59.892307] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x790100) on tqpair(0x72e550): expected_datao=0, payload_size=4096 00:25:20.140 [2024-11-25 12:59:59.892311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.892352] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.892356] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.933047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.140 [2024-11-25 12:59:59.933057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.140 [2024-11-25 12:59:59.933060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.933064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.140 [2024-11-25 12:59:59.933072] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:20.140 [2024-11-25 12:59:59.933077] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:20.140 [2024-11-25 12:59:59.933082] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:20.140 [2024-11-25 12:59:59.933091] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:20.140 [2024-11-25 12:59:59.933096] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:20.140 [2024-11-25 12:59:59.933101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:20.140 [2024-11-25 12:59:59.933111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:20.140 [2024-11-25 12:59:59.933118] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.933122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.140 [2024-11-25 12:59:59.933125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.140 [2024-11-25 12:59:59.933133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:20.140 [2024-11-25 12:59:59.933145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.141 [2024-11-25 12:59:59.933356] 
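
Identify Controller is only issued once the enable handshake above finishes (CC.EN = 1 and CSTS.RDY = 1). The command itself is the IDENTIFY (06) with cdw10:00000001 in the trace, i.e. CNS 01h with a 4096-byte payload that arrives as a single C2H data PDU (datal=4096). The same admin command can be issued by hand through SPDK's raw-command API; a sketch, assuming the payload buffer comes from spdk_zmalloc() with SPDK_MALLOC_DMA:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void identify_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;  /* status checking elided in this sketch */
    }

    /* buf: 4096 bytes from spdk_zmalloc(4096, 0, NULL,
     * SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA). */
    int identify_ctrlr_raw(struct spdk_nvme_ctrlr *ctrlr, void *buf)
    {
        struct spdk_nvme_cmd cmd = {};
        bool done = false;

        cmd.opc   = SPDK_NVME_OPC_IDENTIFY;  /* 06h */
        cmd.nsid  = 0;
        cmd.cdw10 = 1;                       /* CNS 01h: Identify Controller */

        if (spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, buf, 4096,
                                          identify_done, &done) != 0) {
            return -1;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }

In normal use none of this is necessary: the driver caches the parsed result during init, and spdk_nvme_ctrlr_get_data(ctrlr) returns it directly.
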
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.141 [2024-11-25 12:59:59.933362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.141 [2024-11-25 12:59:59.933365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.141 [2024-11-25 12:59:59.933377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.933390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.141 [2024-11-25 12:59:59.933397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.933410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.141 [2024-11-25 12:59:59.933416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.933431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.141 [2024-11-25 12:59:59.933437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.933450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.141 [2024-11-25 12:59:59.933455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.933463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.933469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.933480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.141 [2024-11-25 12:59:59.933492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790100, cid 0, qid 0 00:25:20.141 [2024-11-25 12:59:59.933497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x790280, cid 1, qid 0 00:25:20.141 [2024-11-25 12:59:59.933502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790400, cid 2, qid 0 00:25:20.141 [2024-11-25 12:59:59.933507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790580, cid 3, qid 0 00:25:20.141 [2024-11-25 12:59:59.933511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790700, cid 4, qid 0 00:25:20.141 [2024-11-25 12:59:59.933703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.141 [2024-11-25 12:59:59.933710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.141 [2024-11-25 12:59:59.933713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790700) on tqpair=0x72e550 00:25:20.141 [2024-11-25 12:59:59.933724] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:20.141 [2024-11-25 12:59:59.933729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.933737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.933743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.933749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.933763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:20.141 [2024-11-25 12:59:59.933774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790700, cid 4, qid 0 00:25:20.141 [2024-11-25 12:59:59.933945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.141 [2024-11-25 12:59:59.933952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.141 [2024-11-25 12:59:59.933955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.933959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790700) on tqpair=0x72e550 00:25:20.141 [2024-11-25 12:59:59.934024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.934033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.934040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.934044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.934051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:20.141 [2024-11-25 12:59:59.934062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790700, cid 4, qid 0 00:25:20.141 [2024-11-25 12:59:59.934270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:20.141 [2024-11-25 12:59:59.934277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:20.141 [2024-11-25 12:59:59.934281] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.934284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x72e550): datao=0, datal=4096, cccid=4 00:25:20.141 [2024-11-25 12:59:59.934289] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x790700) on tqpair(0x72e550): expected_datao=0, payload_size=4096 00:25:20.141 [2024-11-25 12:59:59.934293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.934300] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.934304] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.934474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.141 [2024-11-25 12:59:59.934481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.141 [2024-11-25 12:59:59.934484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.934488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790700) on tqpair=0x72e550 00:25:20.141 [2024-11-25 12:59:59.934501] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:20.141 [2024-11-25 12:59:59.934510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.934519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.934526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.934530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.934536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.141 [2024-11-25 12:59:59.934547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790700, cid 4, qid 0 00:25:20.141 [2024-11-25 12:59:59.937869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:20.141 [2024-11-25 12:59:59.937877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:20.141 [2024-11-25 12:59:59.937881] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.937885] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x72e550): datao=0, datal=4096, cccid=4 00:25:20.141 [2024-11-25 12:59:59.937889] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x790700) on tqpair(0x72e550): expected_datao=0, payload_size=4096 00:25:20.141 [2024-11-25 12:59:59.937894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.937900] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 
12:59:59.937904] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.937910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.141 [2024-11-25 12:59:59.937918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.141 [2024-11-25 12:59:59.937921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.937925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790700) on tqpair=0x72e550 00:25:20.141 [2024-11-25 12:59:59.937937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.937947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:20.141 [2024-11-25 12:59:59.937954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.937958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x72e550) 00:25:20.141 [2024-11-25 12:59:59.937965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.141 [2024-11-25 12:59:59.937977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790700, cid 4, qid 0 00:25:20.141 [2024-11-25 12:59:59.938140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:20.141 [2024-11-25 12:59:59.938147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:20.141 [2024-11-25 12:59:59.938151] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.938154] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x72e550): datao=0, datal=4096, cccid=4 00:25:20.141 [2024-11-25 12:59:59.938158] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x790700) on tqpair(0x72e550): expected_datao=0, payload_size=4096 00:25:20.141 [2024-11-25 12:59:59.938163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.141 [2024-11-25 12:59:59.938170] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938173] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.142 [2024-11-25 12:59:59.938358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.142 [2024-11-25 12:59:59.938361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790700) on tqpair=0x72e550 00:25:20.142 [2024-11-25 12:59:59.938372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:20.142 [2024-11-25 12:59:59.938380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:20.142 [2024-11-25 12:59:59.938388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:20.142 [2024-11-25 12:59:59.938394] 
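
Namespace bring-up in the trace is a three-step identify sequence: CNS 02h (cdw10:00000002) fetches the active namespace ID list, which is where "Namespace 1 was added" comes from, then each namespace gets a CNS 00h (cdw10:00000000) identify and a CNS 03h (cdw10:00000003) NS identification descriptor read. Once the connection is up, the result is walkable through SPDK's public iterator API; a small sketch:

    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Enumerate the namespaces populated by the CNS 02h/00h/03h
     * identify sequence traced above. */
    void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                continue;
            }
            printf("nsid %u: %ju sectors of %u bytes\n", nsid,
                   (uintmax_t)spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }
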
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:20.142 [2024-11-25 12:59:59.938399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:20.142 [2024-11-25 12:59:59.938404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:20.142 [2024-11-25 12:59:59.938409] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:20.142 [2024-11-25 12:59:59.938414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:20.142 [2024-11-25 12:59:59.938419] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:20.142 [2024-11-25 12:59:59.938432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.142 [2024-11-25 12:59:59.938452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.142 [2024-11-25 12:59:59.938478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790700, cid 4, qid 0 00:25:20.142 [2024-11-25 12:59:59.938483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790880, cid 5, qid 0 00:25:20.142 [2024-11-25 12:59:59.938566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.142 [2024-11-25 12:59:59.938573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.142 [2024-11-25 12:59:59.938576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790700) on tqpair=0x72e550 00:25:20.142 [2024-11-25 12:59:59.938587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.142 [2024-11-25 12:59:59.938592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.142 [2024-11-25 12:59:59.938596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790880) on tqpair=0x72e550 00:25:20.142 [2024-11-25 12:59:59.938609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.142 [2024-11-25 12:59:59.938629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790880, cid 5, qid 0 00:25:20.142 [2024-11-25 12:59:59.938685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.142 [2024-11-25 12:59:59.938691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.142 [2024-11-25 12:59:59.938695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790880) on tqpair=0x72e550 00:25:20.142 [2024-11-25 12:59:59.938708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.142 [2024-11-25 12:59:59.938728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790880, cid 5, qid 0 00:25:20.142 [2024-11-25 12:59:59.938780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.142 [2024-11-25 12:59:59.938786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.142 [2024-11-25 12:59:59.938790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790880) on tqpair=0x72e550 00:25:20.142 [2024-11-25 12:59:59.938803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.142 [2024-11-25 12:59:59.938823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790880, cid 5, qid 0 00:25:20.142 [2024-11-25 12:59:59.938874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.142 [2024-11-25 12:59:59.938881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.142 [2024-11-25 12:59:59.938884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790880) on tqpair=0x72e550 00:25:20.142 [2024-11-25 12:59:59.938902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.142 [2024-11-25 12:59:59.938920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.142 [2024-11-25 12:59:59.938937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.142 [2024-11-25 12:59:59.938954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.938958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x72e550) 00:25:20.142 [2024-11-25 12:59:59.938964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.142 [2024-11-25 12:59:59.938976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790880, cid 5, qid 0 00:25:20.142 [2024-11-25 12:59:59.938981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790700, cid 4, qid 0 00:25:20.142 [2024-11-25 12:59:59.938986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790a00, cid 6, qid 0 00:25:20.142 [2024-11-25 12:59:59.938991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790b80, cid 7, qid 0 00:25:20.142 [2024-11-25 12:59:59.939288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:20.142 [2024-11-25 12:59:59.939295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:20.142 [2024-11-25 12:59:59.939298] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939302] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x72e550): datao=0, datal=8192, cccid=5 00:25:20.142 [2024-11-25 12:59:59.939307] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x790880) on tqpair(0x72e550): expected_datao=0, payload_size=8192 00:25:20.142 [2024-11-25 12:59:59.939311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939415] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939420] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:20.142 [2024-11-25 12:59:59.939431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:20.142 [2024-11-25 12:59:59.939435] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939438] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x72e550): datao=0, datal=512, cccid=4 00:25:20.142 [2024-11-25 12:59:59.939443] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x790700) on tqpair(0x72e550): expected_datao=0, payload_size=512 00:25:20.142 [2024-11-25 12:59:59.939449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939459] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:20.142 [2024-11-25 
12:59:59.939470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:20.142 [2024-11-25 12:59:59.939474] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939477] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x72e550): datao=0, datal=512, cccid=6 00:25:20.142 [2024-11-25 12:59:59.939482] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x790a00) on tqpair(0x72e550): expected_datao=0, payload_size=512 00:25:20.142 [2024-11-25 12:59:59.939486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939492] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939496] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:20.142 [2024-11-25 12:59:59.939507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:20.142 [2024-11-25 12:59:59.939510] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:20.142 [2024-11-25 12:59:59.939514] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x72e550): datao=0, datal=4096, cccid=7 00:25:20.142 [2024-11-25 12:59:59.939518] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x790b80) on tqpair(0x72e550): expected_datao=0, payload_size=4096 00:25:20.143 [2024-11-25 12:59:59.939523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.143 [2024-11-25 12:59:59.939539] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:20.143 [2024-11-25 12:59:59.939543] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:20.143 [2024-11-25 12:59:59.939675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.143 [2024-11-25 12:59:59.939681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.143 [2024-11-25 12:59:59.939684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.143 [2024-11-25 12:59:59.939688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790880) on tqpair=0x72e550 00:25:20.143 [2024-11-25 12:59:59.939702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.143 [2024-11-25 12:59:59.939708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.143 [2024-11-25 12:59:59.939711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.143 [2024-11-25 12:59:59.939715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790700) on tqpair=0x72e550 00:25:20.143 [2024-11-25 12:59:59.939725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.143 [2024-11-25 12:59:59.939731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.143 [2024-11-25 12:59:59.939734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.143 [2024-11-25 12:59:59.939738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790a00) on tqpair=0x72e550 00:25:20.143 [2024-11-25 12:59:59.939745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.143 [2024-11-25 12:59:59.939751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.143 [2024-11-25 12:59:59.939755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.143 [2024-11-25 12:59:59.939758] 
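
The burst of GET LOG PAGE (02) commands above is the "set supported log pages" step. cdw10 packs the log page ID into the low byte and NUMDL (number of dwords, zero-based) into bits 31:16, so cdw10:07ff0001 reads the Error Information page (LID 01h) as 2048 dwords = 8192 bytes, matching both "Error Log Page Entries Supported: 128" (128 entries x 64 bytes) and the datal=8192 C2H transfer; 007f0002 is the 512-byte SMART / Health page, 007f0003 the Firmware Slot page, and 03ff0005 the 4096-byte Commands Supported and Effects page. Fetching one of them through the public API, as a sketch with the completion polled synchronously for brevity:

    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvme_spec.h"

    static void log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)arg = true;
    }

    /* Issues the same cdw10:007f0002 command as the trace: LID 02h,
     * 128 dwords = 512 bytes = sizeof(*page). */
    int get_health_page(struct spdk_nvme_ctrlr *ctrlr,
                        struct spdk_nvme_health_information_page *page)
    {
        bool done = false;

        if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION,
                SPDK_NVME_GLOBAL_NS_TAG,  /* nsid:ffffffff, as traced */
                page, sizeof(*page), 0,
                log_page_done, &done) != 0) {
            return -1;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }
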
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790b80) on tqpair=0x72e550 ===================================================== 00:25:20.143 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:20.143 ===================================================== 00:25:20.143 Controller Capabilities/Features 00:25:20.143 ================================ 00:25:20.143 Vendor ID: 8086 00:25:20.143 Subsystem Vendor ID: 8086 00:25:20.143 Serial Number: SPDK00000000000001 00:25:20.143 Model Number: SPDK bdev Controller 00:25:20.143 Firmware Version: 25.01 00:25:20.143 Recommended Arb Burst: 6 00:25:20.143 IEEE OUI Identifier: e4 d2 5c 00:25:20.143 Multi-path I/O 00:25:20.143 May have multiple subsystem ports: Yes 00:25:20.143 May have multiple controllers: Yes 00:25:20.143 Associated with SR-IOV VF: No 00:25:20.143 Max Data Transfer Size: 131072 00:25:20.143 Max Number of Namespaces: 32 00:25:20.143 Max Number of I/O Queues: 127 00:25:20.143 NVMe Specification Version (VS): 1.3 00:25:20.143 NVMe Specification Version (Identify): 1.3 00:25:20.143 Maximum Queue Entries: 128 00:25:20.143 Contiguous Queues Required: Yes 00:25:20.143 Arbitration Mechanisms Supported 00:25:20.143 Weighted Round Robin: Not Supported 00:25:20.143 Vendor Specific: Not Supported 00:25:20.143 Reset Timeout: 15000 ms 00:25:20.143 Doorbell Stride: 4 bytes 00:25:20.143 NVM Subsystem Reset: Not Supported 00:25:20.143 Command Sets Supported 00:25:20.143 NVM Command Set: Supported 00:25:20.143 Boot Partition: Not Supported 00:25:20.143 Memory Page Size Minimum: 4096 bytes 00:25:20.143 Memory Page Size Maximum: 4096 bytes 00:25:20.143 Persistent Memory Region: Not Supported 00:25:20.143 Optional Asynchronous Events Supported 00:25:20.143 Namespace Attribute Notices: Supported 00:25:20.143 Firmware Activation Notices: Not Supported 00:25:20.143 ANA Change Notices: Not Supported 00:25:20.143 PLE Aggregate Log Change Notices: Not Supported 00:25:20.143 LBA Status Info Alert Notices: Not Supported 00:25:20.143 EGE Aggregate Log Change Notices: Not Supported 00:25:20.143 Normal NVM Subsystem Shutdown event: Not Supported 00:25:20.143 Zone Descriptor Change Notices: Not Supported 00:25:20.143 Discovery Log Change Notices: Not Supported 00:25:20.143 Controller Attributes 00:25:20.143 128-bit Host Identifier: Supported 00:25:20.143 Non-Operational Permissive Mode: Not Supported 00:25:20.143 NVM Sets: Not Supported 00:25:20.143 Read Recovery Levels: Not Supported 00:25:20.143 Endurance Groups: Not Supported 00:25:20.143 Predictable Latency Mode: Not Supported 00:25:20.143 Traffic Based Keep Alive: Not Supported 00:25:20.143 Namespace Granularity: Not Supported 00:25:20.143 SQ Associations: Not Supported 00:25:20.143 UUID List: Not Supported 00:25:20.143 Multi-Domain Subsystem: Not Supported 00:25:20.143 Fixed Capacity Management: Not Supported 00:25:20.143 Variable Capacity Management: Not Supported 00:25:20.143 Delete Endurance Group: Not Supported 00:25:20.143 Delete NVM Set: Not Supported 00:25:20.143 Extended LBA Formats Supported: Not Supported 00:25:20.143 Flexible Data Placement Supported: Not Supported 00:25:20.143 00:25:20.143 Controller Memory Buffer Support 00:25:20.143 ================================ 00:25:20.143 Supported: No 00:25:20.143 00:25:20.143 Persistent Memory Region Support 00:25:20.143 ================================ 00:25:20.143 Supported: No 00:25:20.143 00:25:20.143 Admin Command Set Attributes 00:25:20.143 ============================ 00:25:20.143 Security
Send/Receive: Not Supported 00:25:20.143 Format NVM: Not Supported 00:25:20.143 Firmware Activate/Download: Not Supported 00:25:20.143 Namespace Management: Not Supported 00:25:20.143 Device Self-Test: Not Supported 00:25:20.143 Directives: Not Supported 00:25:20.143 NVMe-MI: Not Supported 00:25:20.143 Virtualization Management: Not Supported 00:25:20.143 Doorbell Buffer Config: Not Supported 00:25:20.143 Get LBA Status Capability: Not Supported 00:25:20.143 Command & Feature Lockdown Capability: Not Supported 00:25:20.143 Abort Command Limit: 4 00:25:20.143 Async Event Request Limit: 4 00:25:20.143 Number of Firmware Slots: N/A 00:25:20.143 Firmware Slot 1 Read-Only: N/A 00:25:20.143 Firmware Activation Without Reset: N/A 00:25:20.143 Multiple Update Detection Support: N/A 00:25:20.143 Firmware Update Granularity: No Information Provided 00:25:20.143 Per-Namespace SMART Log: No 00:25:20.143 Asymmetric Namespace Access Log Page: Not Supported 00:25:20.143 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:20.143 Command Effects Log Page: Supported 00:25:20.143 Get Log Page Extended Data: Supported 00:25:20.143 Telemetry Log Pages: Not Supported 00:25:20.143 Persistent Event Log Pages: Not Supported 00:25:20.143 Supported Log Pages Log Page: May Support 00:25:20.143 Commands Supported & Effects Log Page: Not Supported 00:25:20.143 Feature Identifiers & Effects Log Page: May Support 00:25:20.143 NVMe-MI Commands & Effects Log Page: May Support 00:25:20.143 Data Area 4 for Telemetry Log: Not Supported 00:25:20.143 Error Log Page Entries Supported: 128 00:25:20.143 Keep Alive: Supported 00:25:20.143 Keep Alive Granularity: 10000 ms 00:25:20.143 00:25:20.143 NVM Command Set Attributes 00:25:20.143 ========================== 00:25:20.143 Submission Queue Entry Size 00:25:20.143 Max: 64 00:25:20.143 Min: 64 00:25:20.143 Completion Queue Entry Size 00:25:20.143 Max: 16 00:25:20.143 Min: 16 00:25:20.143 Number of Namespaces: 32 00:25:20.143 Compare Command: Supported 00:25:20.143 Write Uncorrectable Command: Not Supported 00:25:20.143 Dataset Management Command: Supported 00:25:20.143 Write Zeroes Command: Supported 00:25:20.143 Set Features Save Field: Not Supported 00:25:20.143 Reservations: Supported 00:25:20.143 Timestamp: Not Supported 00:25:20.144 Copy: Supported 00:25:20.144 Volatile Write Cache: Present 00:25:20.144 Atomic Write Unit (Normal): 1 00:25:20.144 Atomic Write Unit (PFail): 1 00:25:20.144 Atomic Compare & Write Unit: 1 00:25:20.144 Fused Compare & Write: Supported 00:25:20.144 Scatter-Gather List 00:25:20.144 SGL Command Set: Supported 00:25:20.144 SGL Keyed: Supported 00:25:20.144 SGL Bit Bucket Descriptor: Not Supported 00:25:20.144 SGL Metadata Pointer: Not Supported 00:25:20.144 Oversized SGL: Not Supported 00:25:20.144 SGL Metadata Address: Not Supported 00:25:20.144 SGL Offset: Supported 00:25:20.144 Transport SGL Data Block: Not Supported 00:25:20.144 Replay Protected Memory Block: Not Supported 00:25:20.144 00:25:20.144 Firmware Slot Information 00:25:20.144 ========================= 00:25:20.144 Active slot: 1 00:25:20.144 Slot 1 Firmware Revision: 25.01 00:25:20.144 00:25:20.144 00:25:20.144 Commands Supported and Effects 00:25:20.144 ============================== 00:25:20.144 Admin Commands 00:25:20.144 -------------- 00:25:20.144 Get Log Page (02h): Supported 00:25:20.144 Identify (06h): Supported 00:25:20.144 Abort (08h): Supported 00:25:20.144 Set Features (09h): Supported 00:25:20.144 Get Features (0Ah): Supported 00:25:20.144 Asynchronous Event Request (0Ch):
Supported 00:25:20.144 Keep Alive (18h): Supported 00:25:20.144 I/O Commands 00:25:20.144 ------------ 00:25:20.144 Flush (00h): Supported LBA-Change 00:25:20.144 Write (01h): Supported LBA-Change 00:25:20.144 Read (02h): Supported 00:25:20.144 Compare (05h): Supported 00:25:20.144 Write Zeroes (08h): Supported LBA-Change 00:25:20.144 Dataset Management (09h): Supported LBA-Change 00:25:20.144 Copy (19h): Supported LBA-Change 00:25:20.144 00:25:20.144 Error Log 00:25:20.144 ========= 00:25:20.144 00:25:20.144 Arbitration 00:25:20.144 =========== 00:25:20.144 Arbitration Burst: 1 00:25:20.144 00:25:20.144 Power Management 00:25:20.144 ================ 00:25:20.144 Number of Power States: 1 00:25:20.144 Current Power State: Power State #0 00:25:20.144 Power State #0: 00:25:20.144 Max Power: 0.00 W 00:25:20.144 Non-Operational State: Operational 00:25:20.144 Entry Latency: Not Reported 00:25:20.144 Exit Latency: Not Reported 00:25:20.144 Relative Read Throughput: 0 00:25:20.144 Relative Read Latency: 0 00:25:20.144 Relative Write Throughput: 0 00:25:20.144 Relative Write Latency: 0 00:25:20.144 Idle Power: Not Reported 00:25:20.144 Active Power: Not Reported 00:25:20.144 Non-Operational Permissive Mode: Not Supported 00:25:20.144 00:25:20.144 Health Information 00:25:20.144 ================== 00:25:20.144 Critical Warnings: 00:25:20.144 Available Spare Space: OK 00:25:20.144 Temperature: OK 00:25:20.144 Device Reliability: OK 00:25:20.144 Read Only: No 00:25:20.144 Volatile Memory Backup: OK 00:25:20.144 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:20.144 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:20.144 Available Spare: 0% 00:25:20.144 Available Spare Threshold: 0% 00:25:20.144 Life Percentage Used:[2024-11-25 12:59:59.939854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.939860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x72e550) 00:25:20.144 [2024-11-25 12:59:59.939871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.144 [2024-11-25 12:59:59.939882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790b80, cid 7, qid 0 00:25:20.144 [2024-11-25 12:59:59.940070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.144 [2024-11-25 12:59:59.940077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.144 [2024-11-25 12:59:59.940080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790b80) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.940113] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:20.144 [2024-11-25 12:59:59.940123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790100) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.940129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.144 [2024-11-25 12:59:59.940134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790280) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.940139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.144 [2024-11-25 
12:59:59.940144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790400) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.940148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.144 [2024-11-25 12:59:59.940153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.940158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.144 [2024-11-25 12:59:59.940166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.144 [2024-11-25 12:59:59.940180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.144 [2024-11-25 12:59:59.940192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790580, cid 3, qid 0 00:25:20.144 [2024-11-25 12:59:59.940390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.144 [2024-11-25 12:59:59.940396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.144 [2024-11-25 12:59:59.940400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.940410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.144 [2024-11-25 12:59:59.940424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.144 [2024-11-25 12:59:59.940437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790580, cid 3, qid 0 00:25:20.144 [2024-11-25 12:59:59.940604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.144 [2024-11-25 12:59:59.940611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.144 [2024-11-25 12:59:59.940614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.940622] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:20.144 [2024-11-25 12:59:59.940627] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:20.144 [2024-11-25 12:59:59.940636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.144 [2024-11-25 12:59:59.940653] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.144 [2024-11-25 12:59:59.940663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790580, cid 3, qid 0 00:25:20.144 [2024-11-25 12:59:59.940818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.144 [2024-11-25 12:59:59.940825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.144 [2024-11-25 12:59:59.940828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.940842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.940849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.144 [2024-11-25 12:59:59.940856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.144 [2024-11-25 12:59:59.940870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790580, cid 3, qid 0 00:25:20.144 [2024-11-25 12:59:59.941045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.144 [2024-11-25 12:59:59.941051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.144 [2024-11-25 12:59:59.941055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.941059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.941068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.941072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.941076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.144 [2024-11-25 12:59:59.941082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.144 [2024-11-25 12:59:59.941092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790580, cid 3, qid 0 00:25:20.144 [2024-11-25 12:59:59.941322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.144 [2024-11-25 12:59:59.941328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.144 [2024-11-25 12:59:59.941332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.941336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.144 [2024-11-25 12:59:59.941345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.144 [2024-11-25 12:59:59.941349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.145 [2024-11-25 12:59:59.941353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.145 [2024-11-25 12:59:59.941359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.145 [2024-11-25 12:59:59.941369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x790580, cid 3, qid 0 00:25:20.145 [2024-11-25 12:59:59.941592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.145 [2024-11-25 12:59:59.941599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.145 [2024-11-25 12:59:59.941602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.145 [2024-11-25 12:59:59.941606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.145 [2024-11-25 12:59:59.941616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.145 [2024-11-25 12:59:59.941620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.145 [2024-11-25 12:59:59.941623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.145 [2024-11-25 12:59:59.941632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.145 [2024-11-25 12:59:59.941643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790580, cid 3, qid 0 00:25:20.145 [2024-11-25 12:59:59.945869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.145 [2024-11-25 12:59:59.945877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.145 [2024-11-25 12:59:59.945881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.145 [2024-11-25 12:59:59.945885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.145 [2024-11-25 12:59:59.945895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:20.145 [2024-11-25 12:59:59.945899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:20.145 [2024-11-25 12:59:59.945902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x72e550) 00:25:20.145 [2024-11-25 12:59:59.945909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.145 [2024-11-25 12:59:59.945921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x790580, cid 3, qid 0 00:25:20.145 [2024-11-25 12:59:59.945993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:20.145 [2024-11-25 12:59:59.945999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:20.145 [2024-11-25 12:59:59.946003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:20.145 [2024-11-25 12:59:59.946006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x790580) on tqpair=0x72e550 00:25:20.145 [2024-11-25 12:59:59.946014] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:25:20.145 Data Units Read: 0 00:25:20.145 Data Units Written: 0 00:25:20.145 Host Read Commands: 0 00:25:20.145 Host Write Commands: 0 00:25:20.145 Controller Busy Time: 0 minutes 00:25:20.145 Power Cycles: 0 00:25:20.145 Power On Hours: 0 hours 00:25:20.145 Unsafe Shutdowns: 0 00:25:20.145 Unrecoverable Media Errors: 0 00:25:20.145 Lifetime Error Log Entries: 0 00:25:20.145 Warning Temperature Time: 0 minutes 00:25:20.145 Critical Temperature Time: 0 minutes 00:25:20.145 00:25:20.145 Number of Queues 00:25:20.145 ================ 00:25:20.145 Number of I/O Submission Queues: 127 00:25:20.145 Number of I/O Completion Queues: 127 00:25:20.145
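Everything from the controller capabilities above down to the namespace data that follows is printed by SPDK's identify example over the NVMe/TCP connection under test. A rough by-hand reproduction against the same target is sketched here; it assumes the same 10.0.0.2:4420 listener that the perf suite below configures, and the -L log flags (which are what produce the interleaved *DEBUG* lines) are assumed to require an SPDK tree built with --enable-debug:

    # Sketch: dump controller/namespace data from the NVMe/TCP target by hand.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo $SPDK_DIR/build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -L nvme -L nvme_tcp   # log flags assumed; drop them for a quiet dump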
00:25:20.145 Active Namespaces 00:25:20.145 ================= 00:25:20.145 Namespace ID:1 00:25:20.145 Error Recovery Timeout: Unlimited 00:25:20.145 Command Set Identifier: NVM (00h) 00:25:20.145 Deallocate: Supported 00:25:20.145 Deallocated/Unwritten Error: Not Supported 00:25:20.145 Deallocated Read Value: Unknown 00:25:20.145 Deallocate in Write Zeroes: Not Supported 00:25:20.145 Deallocated Guard Field: 0xFFFF 00:25:20.145 Flush: Supported 00:25:20.145 Reservation: Supported 00:25:20.145 Namespace Sharing Capabilities: Multiple Controllers 00:25:20.145 Size (in LBAs): 131072 (0GiB) 00:25:20.145 Capacity (in LBAs): 131072 (0GiB) 00:25:20.145 Utilization (in LBAs): 131072 (0GiB) 00:25:20.145 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:20.145 EUI64: ABCDEF0123456789 00:25:20.145 UUID: 0ed13057-b3a3-45d4-be84-ab3fed2253da 00:25:20.145 Thin Provisioning: Not Supported 00:25:20.145 Per-NS Atomic Units: Yes 00:25:20.145 Atomic Boundary Size (Normal): 0 00:25:20.145 Atomic Boundary Size (PFail): 0 00:25:20.145 Atomic Boundary Offset: 0 00:25:20.145 Maximum Single Source Range Length: 65535 00:25:20.145 Maximum Copy Length: 65535 00:25:20.145 Maximum Source Range Count: 1 00:25:20.145 NGUID/EUI64 Never Reused: No 00:25:20.145 Namespace Write Protected: No 00:25:20.145 Number of LBA Formats: 1 00:25:20.145 Current LBA Format: LBA Format #00 00:25:20.145 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:20.145 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:20.145 12:59:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:20.145 rmmod nvme_tcp 00:25:20.145 rmmod nvme_fabrics 00:25:20.145 rmmod nvme_keyring 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 734109 ']' 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 734109 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 734109 ']' 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 
734109 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734109 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734109' 00:25:20.407 killing process with pid 734109 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 734109 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 734109 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.407 13:00:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.952 00:25:22.952 real 0m12.532s 00:25:22.952 user 0m8.625s 00:25:22.952 sys 0m6.870s 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.952 ************************************ 00:25:22.952 END TEST nvmf_identify 00:25:22.952 ************************************ 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.952 ************************************ 00:25:22.952 START TEST nvmf_perf 00:25:22.952 ************************************ 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 
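Every run below is driven by spdk_nvme_perf; from run to run only the knobs change: -q (queue depth), -o (IO size in bytes), -w plus -M (workload pattern and read percentage), -t (run time in seconds), and -r (transport ID of the local device or fabric target). A representative fabric-side invocation, flags verbatim from the final run of this suite:

    # Final perf pass below: 256 KiB random 50/50 read/write at queue depth 128
    # over NVMe/TCP, with per-transport poll statistics printed at the end.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat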
00:25:22.952 * Looking for test storage... 00:25:22.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:22.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.952 --rc genhtml_branch_coverage=1 00:25:22.952 --rc genhtml_function_coverage=1 00:25:22.952 --rc genhtml_legend=1 00:25:22.952 --rc geninfo_all_blocks=1 00:25:22.952 --rc geninfo_unexecuted_blocks=1 00:25:22.952 00:25:22.952 ' 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:22.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.952 --rc genhtml_branch_coverage=1 00:25:22.952 --rc genhtml_function_coverage=1 00:25:22.952 --rc genhtml_legend=1 00:25:22.952 --rc geninfo_all_blocks=1 00:25:22.952 --rc geninfo_unexecuted_blocks=1 00:25:22.952 00:25:22.952 ' 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:22.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.952 --rc genhtml_branch_coverage=1 00:25:22.952 --rc genhtml_function_coverage=1 00:25:22.952 --rc genhtml_legend=1 00:25:22.952 --rc geninfo_all_blocks=1 00:25:22.952 --rc geninfo_unexecuted_blocks=1 00:25:22.952 00:25:22.952 ' 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:22.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.952 --rc genhtml_branch_coverage=1 00:25:22.952 --rc genhtml_function_coverage=1 00:25:22.952 --rc genhtml_legend=1 00:25:22.952 --rc geninfo_all_blocks=1 00:25:22.952 --rc geninfo_unexecuted_blocks=1 00:25:22.952 00:25:22.952 ' 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.952 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.953 13:00:02 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.953 13:00:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.089 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:31.090 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:31.090 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:31.090 Found net devices under 0000:31:00.0: cvl_0_0 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.090 13:00:10 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:31.090 Found net devices under 0000:31:00.1: cvl_0_1 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.090 13:00:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.351 13:00:11 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:25:31.351 00:25:31.351 --- 10.0.0.2 ping statistics --- 00:25:31.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.351 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:31.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:25:31.351 00:25:31.351 --- 10.0.0.1 ping statistics --- 00:25:31.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.351 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=739731 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 739731 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 739731 ']' 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:31.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.351 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:31.351 [2024-11-25 13:00:11.176152] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:25:31.351 [2024-11-25 13:00:11.176201] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.612 [2024-11-25 13:00:11.267488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.612 [2024-11-25 13:00:11.303701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.612 [2024-11-25 13:00:11.303737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.612 [2024-11-25 13:00:11.303745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.612 [2024-11-25 13:00:11.303752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.612 [2024-11-25 13:00:11.303759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.612 [2024-11-25 13:00:11.305329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.612 [2024-11-25 13:00:11.305443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.612 [2024-11-25 13:00:11.305597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.612 [2024-11-25 13:00:11.305598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.182 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.182 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:32.182 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:32.182 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.182 13:00:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:32.182 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.182 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:32.182 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:32.751 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:32.751 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:33.010 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:33.010 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:33.010 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
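With Malloc0 created (and Nvme0n1 picked up from the local controller at 0000:65:00.0), perf.sh wires both bdevs into one NVMe/TCP subsystem through rpc.py, as traced below. Condensed into a single sketch, with the addresses and NQN exactly as in this run:

    # Target-side bring-up, condensed from the rpc.py calls traced below.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420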
00:25:33.010 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:33.010 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:33.010 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:33.010 13:00:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:33.270 [2024-11-25 13:00:13.052919] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.270 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.530 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:33.530 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:33.791 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:33.791 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:33.791 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.053 [2024-11-25 13:00:13.779620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.053 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:34.313 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:34.313 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:34.313 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:34.313 13:00:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:35.698 Initializing NVMe Controllers 00:25:35.698 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:35.698 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:35.698 Initialization complete. Launching workers. 
00:25:35.698 ======================================================== 00:25:35.698 Latency(us) 00:25:35.698 Device Information : IOPS MiB/s Average min max 00:25:35.698 PCIE (0000:65:00.0) NSID 1 from core 0: 78930.32 308.32 404.94 13.45 5354.58 00:25:35.698 ======================================================== 00:25:35.698 Total : 78930.32 308.32 404.94 13.45 5354.58 00:25:35.698 00:25:35.698 13:00:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.083 Initializing NVMe Controllers 00:25:37.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:37.083 Initialization complete. Launching workers. 00:25:37.083 ======================================================== 00:25:37.083 Latency(us) 00:25:37.083 Device Information : IOPS MiB/s Average min max 00:25:37.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 55.80 0.22 17922.10 155.19 46182.72 00:25:37.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.79 0.22 18447.29 6304.84 47889.87 00:25:37.083 ======================================================== 00:25:37.083 Total : 112.59 0.44 18187.02 155.19 47889.87 00:25:37.083 00:25:37.083 13:00:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.026 Initializing NVMe Controllers 00:25:38.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:38.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:38.026 Initialization complete. Launching workers. 00:25:38.026 ======================================================== 00:25:38.026 Latency(us) 00:25:38.026 Device Information : IOPS MiB/s Average min max 00:25:38.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10284.00 40.17 3115.70 578.11 6524.40 00:25:38.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3815.00 14.90 8436.67 6217.11 15846.37 00:25:38.026 ======================================================== 00:25:38.026 Total : 14099.00 55.07 4555.49 578.11 15846.37 00:25:38.026 00:25:38.026 13:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:38.026 13:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:38.026 13:00:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:40.572 Initializing NVMe Controllers 00:25:40.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.572 Controller IO queue size 128, less than required. 00:25:40.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:25:40.572 Controller IO queue size 128, less than required. 00:25:40.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:40.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:40.572 Initialization complete. Launching workers. 00:25:40.572 ======================================================== 00:25:40.572 Latency(us) 00:25:40.572 Device Information : IOPS MiB/s Average min max 00:25:40.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1963.63 490.91 66130.08 40112.74 109606.42 00:25:40.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 581.74 145.44 228733.95 70194.22 363849.36 00:25:40.572 ======================================================== 00:25:40.572 Total : 2545.38 636.34 103293.03 40112.74 363849.36 00:25:40.572 00:25:40.572 13:00:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:40.833 No valid NVMe controllers or AIO or URING devices found 00:25:40.833 Initializing NVMe Controllers 00:25:40.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.833 Controller IO queue size 128, less than required. 00:25:40.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.833 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:40.833 Controller IO queue size 128, less than required. 00:25:40.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.833 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:40.833 WARNING: Some requested NVMe devices were skipped 00:25:40.833 13:00:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:43.381 Initializing NVMe Controllers 00:25:43.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.381 Controller IO queue size 128, less than required. 00:25:43.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.381 Controller IO queue size 128, less than required. 00:25:43.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:43.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:43.381 Initialization complete. Launching workers. 
00:25:43.381
00:25:43.381 ====================
00:25:43.381 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:43.381 TCP transport:
00:25:43.381 polls: 16139
00:25:43.381 idle_polls: 7010
00:25:43.381 sock_completions: 9129
00:25:43.381 nvme_completions: 7339
00:25:43.381 submitted_requests: 10966
00:25:43.381 queued_requests: 1
00:25:43.381
00:25:43.381 ====================
00:25:43.381 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:43.381 TCP transport:
00:25:43.381 polls: 19540
00:25:43.381 idle_polls: 12199
00:25:43.381 sock_completions: 7341
00:25:43.381 nvme_completions: 5573
00:25:43.381 submitted_requests: 8394
00:25:43.381 queued_requests: 1
00:25:43.381 ========================================================
00:25:43.381 Latency(us)
00:25:43.381 Device Information : IOPS MiB/s Average min max
00:25:43.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1830.47 457.62 70841.73 40466.21 110839.70
00:25:43.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1389.94 347.49 93541.94 39553.25 151937.34
00:25:43.381 ========================================================
00:25:43.381 Total : 3220.42 805.10 80639.22 39553.25 151937.34
00:25:43.381
00:25:43.381 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:43.642 rmmod nvme_tcp
00:25:43.642 rmmod nvme_fabrics
00:25:43.642 rmmod nvme_keyring
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 739731 ']'
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 739731
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 739731 ']'
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 739731
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 739731
00:25:43.642 13:00:23
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 739731' 00:25:43.642 killing process with pid 739731 00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 739731 00:25:43.642 13:00:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 739731 00:25:45.554 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.554 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.554 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.554 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:45.554 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:45.554 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.554 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.815 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.815 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.815 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.815 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.815 13:00:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.727 13:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.727 00:25:47.727 real 0m25.110s 00:25:47.727 user 0m58.359s 00:25:47.727 sys 0m9.153s 00:25:47.727 13:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.727 13:00:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:47.727 ************************************ 00:25:47.727 END TEST nvmf_perf 00:25:47.727 ************************************ 00:25:47.727 13:00:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:47.727 13:00:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.727 13:00:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.727 13:00:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.727 ************************************ 00:25:47.727 START TEST nvmf_fio_host 00:25:47.727 ************************************ 00:25:47.727 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:47.989 * Looking for test storage... 
00:25:47.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.989 --rc genhtml_branch_coverage=1 00:25:47.989 --rc genhtml_function_coverage=1 00:25:47.989 --rc genhtml_legend=1 00:25:47.989 --rc geninfo_all_blocks=1 00:25:47.989 --rc geninfo_unexecuted_blocks=1 00:25:47.989 00:25:47.989 ' 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.989 --rc genhtml_branch_coverage=1 00:25:47.989 --rc genhtml_function_coverage=1 00:25:47.989 --rc genhtml_legend=1 00:25:47.989 --rc geninfo_all_blocks=1 00:25:47.989 --rc geninfo_unexecuted_blocks=1 00:25:47.989 00:25:47.989 ' 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.989 --rc genhtml_branch_coverage=1 00:25:47.989 --rc genhtml_function_coverage=1 00:25:47.989 --rc genhtml_legend=1 00:25:47.989 --rc geninfo_all_blocks=1 00:25:47.989 --rc geninfo_unexecuted_blocks=1 00:25:47.989 00:25:47.989 ' 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.989 --rc genhtml_branch_coverage=1 00:25:47.989 --rc genhtml_function_coverage=1 00:25:47.989 --rc genhtml_legend=1 00:25:47.989 --rc geninfo_all_blocks=1 00:25:47.989 --rc geninfo_unexecuted_blocks=1 00:25:47.989 00:25:47.989 ' 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.989 13:00:27 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:47.989 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:47.990 
13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.990 13:00:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:56.137 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:56.138 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:56.138 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:56.138 Found net devices under 0000:31:00.0: cvl_0_0 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:56.138 Found net devices under 0000:31:00.1: cvl_0_1 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:56.138 13:00:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:56.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:56.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms
00:25:56.401
00:25:56.401 --- 10.0.0.2 ping statistics ---
00:25:56.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:56.401 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:56.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:56.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms
00:25:56.401
00:25:56.401 --- 10.0.0.1 ping statistics ---
00:25:56.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:56.401 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=747314
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 747314
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 747314 ']'
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:56.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:56.401 13:00:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.661 [2024-11-25 13:00:36.328934] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
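
Before the fio host test starts, nvmf/common.sh splits the two NIC ports into a target and an initiator side, as traced above. A condensed replay of that wiring (interface names and addresses are the ones this rig reports; paths are shortened to the spdk checkout, and the harness then polls /var/tmp/spdk.sock via waitforlisten rather than blocking on the target):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                     # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
    # Launch the SPDK target inside the namespace (backgrounding it here is an
    # assumption of this sketch; the log shows the same command run by fio.sh@23):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
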
00:25:56.661 [2024-11-25 13:00:36.329022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.661 [2024-11-25 13:00:36.423417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:56.661 [2024-11-25 13:00:36.465466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.661 [2024-11-25 13:00:36.465501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.661 [2024-11-25 13:00:36.465509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.661 [2024-11-25 13:00:36.465516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.661 [2024-11-25 13:00:36.465522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.661 [2024-11-25 13:00:36.467164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.661 [2024-11-25 13:00:36.467283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.661 [2024-11-25 13:00:36.467441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.661 [2024-11-25 13:00:36.467442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:57.602 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.602 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:57.602 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:57.602 [2024-11-25 13:00:37.290473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.602 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:57.602 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:57.602 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.602 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:57.863 Malloc1 00:25:57.863 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:57.863 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:58.123 13:00:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.381 [2024-11-25 13:00:38.087627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.381 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:58.655 13:00:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:58.917 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:58.917 fio-3.35 00:25:58.917 Starting 1 thread 00:26:01.463 00:26:01.463 test: (groupid=0, jobs=1): 
err= 0: pid=747955: Mon Nov 25 13:00:41 2024
00:26:01.463 read: IOPS=13.9k, BW=54.1MiB/s (56.7MB/s)(108MiB/2005msec)
00:26:01.463 slat (usec): min=2, max=301, avg= 2.17, stdev= 2.52
00:26:01.463 clat (usec): min=3367, max=9056, avg=5091.53, stdev=383.12
00:26:01.463 lat (usec): min=3369, max=9058, avg=5093.70, stdev=383.36
00:26:01.463 clat percentiles (usec):
00:26:01.463 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817],
00:26:01.463 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145],
00:26:01.463 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604],
00:26:01.463 | 99.00th=[ 5932], 99.50th=[ 6915], 99.90th=[ 8094], 99.95th=[ 8717],
00:26:01.463 | 99.99th=[ 8848]
00:26:01.463 bw ( KiB/s): min=54576, max=55808, per=100.00%, avg=55430.00, stdev=575.83, samples=4
00:26:01.463 iops : min=13644, max=13952, avg=13857.50, stdev=143.96, samples=4
00:26:01.463 write: IOPS=13.9k, BW=54.1MiB/s (56.8MB/s)(109MiB/2005msec); 0 zone resets
00:26:01.463 slat (usec): min=2, max=266, avg= 2.24, stdev= 1.78
00:26:01.463 clat (usec): min=2689, max=8893, avg=4120.40, stdev=348.14
00:26:01.463 lat (usec): min=2691, max=8895, avg=4122.63, stdev=348.43
00:26:01.463 clat percentiles (usec):
00:26:01.463 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884],
00:26:01.463 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178],
00:26:01.463 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555],
00:26:01.463 | 99.00th=[ 4883], 99.50th=[ 6259], 99.90th=[ 7046], 99.95th=[ 7898],
00:26:01.463 | 99.99th=[ 8848]
00:26:01.463 bw ( KiB/s): min=54904, max=55800, per=100.00%, avg=55452.00, stdev=383.58, samples=4
00:26:01.463 iops : min=13726, max=13948, avg=13863.00, stdev=95.48, samples=4
00:26:01.463 lat (msec) : 4=16.87%, 10=83.13%
00:26:01.463 cpu : usr=74.35%, sys=24.20%, ctx=44, majf=0, minf=17
00:26:01.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:26:01.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:01.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:01.463 issued rwts: total=27775,27793,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:01.463 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:01.463
00:26:01.463 Run status group 0 (all jobs):
00:26:01.463 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2005-2005msec
00:26:01.463 WRITE: bw=54.1MiB/s (56.8MB/s), 54.1MiB/s-54.1MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2005-2005msec
00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:26:01.463
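
The fio pass above runs against a target that host/fio.sh provisioned a few steps earlier over JSON-RPC. A condensed sketch of that provision-then-run flow, with paths shortened to the spdk checkout (the log uses the full Jenkins workspace prefix):

    # Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, and a
    # subsystem exposing it on 10.0.0.2:4420, exactly the RPC sequence traced above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: fio with the SPDK NVMe plugin preloaded; the transport ID is
    # passed through --filename instead of a block-device path.
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
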
13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:01.463 13:00:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:01.724 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:01.724 fio-3.35 00:26:01.724 Starting 1 thread 00:26:04.272 00:26:04.272 test: (groupid=0, jobs=1): err= 0: pid=748780: Mon Nov 25 13:00:44 2024 00:26:04.272 read: IOPS=9255, BW=145MiB/s (152MB/s)(290MiB/2002msec) 00:26:04.272 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.72 00:26:04.272 clat (usec): min=577, max=15520, avg=8425.86, stdev=2017.62 00:26:04.272 lat (usec): min=584, max=15523, avg=8429.47, stdev=2017.77 00:26:04.272 clat percentiles (usec): 00:26:04.272 | 1.00th=[ 4293], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6521], 00:26:04.272 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 8979], 00:26:04.272 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11731], 00:26:04.272 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14746], 99.95th=[15008], 00:26:04.272 | 99.99th=[15401] 00:26:04.272 bw ( KiB/s): min=68480, max=83334, per=49.02%, avg=72593.50, stdev=7171.45, samples=4 00:26:04.272 iops : min= 4280, max= 5208, avg=4537.00, stdev=448.03, samples=4 00:26:04.272 write: IOPS=5541, BW=86.6MiB/s (90.8MB/s)(148MiB/1707msec); 0 zone resets 00:26:04.272 slat (usec): min=39, max=427, avg=41.04, 
stdev= 8.43
00:26:04.272 clat (usec): min=2437, max=16075, avg=9437.85, stdev=1639.52
00:26:04.272 lat (usec): min=2477, max=16211, avg=9478.88, stdev=1641.58
00:26:04.272 clat percentiles (usec):
00:26:04.272 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8094],
00:26:04.272 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634],
00:26:04.272 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12387],
00:26:04.272 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15664], 99.95th=[15795],
00:26:04.272 | 99.99th=[16057]
00:26:04.272 bw ( KiB/s): min=71360, max=86797, per=85.40%, avg=75715.25, stdev=7403.36, samples=4
00:26:04.272 iops : min= 4460, max= 5424, avg=4732.00, stdev=462.30, samples=4
00:26:04.272 lat (usec) : 750=0.01%
00:26:04.272 lat (msec) : 2=0.02%, 4=0.47%, 10=73.75%, 20=25.76%
00:26:04.272 cpu : usr=84.56%, sys=13.79%, ctx=16, majf=0, minf=47
00:26:04.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6%
00:26:04.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:04.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:26:04.272 issued rwts: total=18529,9459,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:04.272 latency : target=0, window=0, percentile=100.00%, depth=128
00:26:04.272
00:26:04.272 Run status group 0 (all jobs):
00:26:04.272 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=290MiB (304MB), run=2002-2002msec
00:26:04.272 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=148MiB (155MB), run=1707-1707msec
00:26:04.272 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:04.533 rmmod nvme_tcp
00:26:04.533 rmmod nvme_fabrics
00:26:04.533 rmmod nvme_keyring
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 747314 ']'
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 747314
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 747314 ']'
00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host --
common/autotest_common.sh@958 -- # kill -0 747314 00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 747314 00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 747314' 00:26:04.533 killing process with pid 747314 00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 747314 00:26:04.533 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 747314 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.795 13:00:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.706 13:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:06.967 00:26:06.967 real 0m18.995s 00:26:06.967 user 1m5.041s 00:26:06.967 sys 0m8.315s 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.967 ************************************ 00:26:06.967 END TEST nvmf_fio_host 00:26:06.967 ************************************ 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.967 ************************************ 00:26:06.967 START TEST nvmf_failover 00:26:06.967 ************************************ 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:06.967 * Looking for test storage... 00:26:06.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:06.967 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:07.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.229 --rc genhtml_branch_coverage=1 00:26:07.229 --rc genhtml_function_coverage=1 00:26:07.229 --rc genhtml_legend=1 00:26:07.229 --rc geninfo_all_blocks=1 00:26:07.229 --rc geninfo_unexecuted_blocks=1 00:26:07.229 00:26:07.229 ' 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:07.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.229 --rc genhtml_branch_coverage=1 00:26:07.229 --rc genhtml_function_coverage=1 00:26:07.229 --rc genhtml_legend=1 00:26:07.229 --rc geninfo_all_blocks=1 00:26:07.229 --rc geninfo_unexecuted_blocks=1 00:26:07.229 00:26:07.229 ' 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:07.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.229 --rc genhtml_branch_coverage=1 00:26:07.229 --rc genhtml_function_coverage=1 00:26:07.229 --rc genhtml_legend=1 00:26:07.229 --rc geninfo_all_blocks=1 00:26:07.229 --rc geninfo_unexecuted_blocks=1 00:26:07.229 00:26:07.229 ' 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:07.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.229 --rc genhtml_branch_coverage=1 00:26:07.229 --rc genhtml_function_coverage=1 00:26:07.229 --rc genhtml_legend=1 00:26:07.229 --rc geninfo_all_blocks=1 00:26:07.229 --rc geninfo_unexecuted_blocks=1 00:26:07.229 00:26:07.229 ' 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.229 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:07.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
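The "[: : integer expression expected" complaint from nvmf/common.sh line 33, a few entries above, is a real script wart captured by the trace: '[' '' -eq 1 ']' feeds an unset variable to an integer comparison. A minimal hardening sketch, assuming the value tested at line 33 is an optional environment toggle (the log does not reveal its name, so SPDK_TEST_SOME_FLAG and extra_args below are placeholders, not the real identifiers):

    # Default the possibly-unset toggle before the numeric test so
    # '[' never sees an empty string where an integer is required.
    extra_args=()   # hypothetical; stands in for whatever the real branch appends
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=("${extra_args[@]}")
    fi

With the ":-0" default in place the branch is simply skipped, instead of printing the same error into every test's output.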
00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:07.230 13:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:15.536 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:15.536 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:15.536 Found net devices under 0000:31:00.0: cvl_0_0 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:15.536 Found net devices under 0000:31:00.1: cvl_0_1 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.536 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:15.537 13:00:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:15.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:26:15.537 00:26:15.537 --- 10.0.0.2 ping statistics --- 00:26:15.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.537 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:15.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:26:15.537 00:26:15.537 --- 10.0.0.1 ping statistics --- 00:26:15.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.537 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=753812 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 753812 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 753812 ']' 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.537 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:15.537 [2024-11-25 13:00:55.193090] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:26:15.537 [2024-11-25 13:00:55.193158] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.537 [2024-11-25 13:00:55.302529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:15.537 [2024-11-25 13:00:55.355857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
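Strung together, the namespace plumbing traced above gives the test its two-endpoint topology on a single host: one E810 port (cvl_0_0) moves into a private namespace and carries the target address, its sibling (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule admits the NVMe/TCP port. A re-runnable condensation of exactly those commands (interface names are specific to this machine):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target sanity check

The sub-millisecond round-trip times in the ping output above confirm the loopback path before nvmf_tgt is launched inside the namespace via ip netns exec, as the trace that follows shows.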
00:26:15.537 [2024-11-25 13:00:55.355920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.537 [2024-11-25 13:00:55.355929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.537 [2024-11-25 13:00:55.355936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.537 [2024-11-25 13:00:55.355943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.537 [2024-11-25 13:00:55.358048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.537 [2024-11-25 13:00:55.358389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.537 [2024-11-25 13:00:55.358390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.109 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.109 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:16.109 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:16.109 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.109 13:00:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:16.369 13:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.369 13:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:16.369 [2024-11-25 13:00:56.167413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.369 13:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:16.629 Malloc0 00:26:16.629 13:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:16.889 13:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:16.889 13:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.149 [2024-11-25 13:00:56.904310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.149 13:00:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:17.410 [2024-11-25 13:00:57.080758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:17.410 [2024-11-25 13:00:57.257299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=754274 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 754274 /var/tmp/bdevperf.sock 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 754274 ']' 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.410 13:00:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:18.351 13:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.351 13:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:18.351 13:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:18.922 NVMe0n1 00:26:18.922 13:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:18.922 00:26:19.183 13:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=754528 00:26:19.183 13:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:19.183 13:00:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:20.125 13:00:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.125 [2024-11-25 13:00:59.993461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5410 is same with the state(6) to be set 00:26:20.125 [2024-11-25 13:00:59.993497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5410 is same with the state(6) to be set 00:26:20.125 [2024-11-25 13:00:59.993503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5410 is same with the state(6) to be set 00:26:20.125 [2024-11-25 
13:00:59.993509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5410 is same with the state(6) to be set 00:26:20.125 [2024-11-25 13:00:59.993609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5410 is same with the state(6) to 
be set 00:26:20.126 13:01:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:23.427 13:01:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:23.427 00:26:23.427 13:01:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:23.688 [2024-11-25 13:01:03.463542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc61c0 is same with the 
state(6) to be set 00:26:23.688 13:01:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:26.987 13:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.987 [2024-11-25 13:01:06.655944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.987 13:01:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
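Stripped of timestamps, the failover choreography in the surrounding entries is short: bdevperf attaches the same subsystem over two portals with -x failover, I/O starts, and the target's listeners are then pulled one at a time, forcing the host to switch paths. A condensed replay of the RPCs traced above ($rpc abbreviates the full scripts/rpc.py path; the bdevperf RPC socket is the -r /var/tmp/bdevperf.sock one opened at failover.sh line 30):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3        # I/O fails over to the 4421 path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3        # I/O fails over again, to 4422
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Each remove_listener is what triggers a burst of tcp.c:1773 recv-state errors, as the live qpair on that portal is torn down.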
00:26:27.928 13:01:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:28.190 [2024-11-25 13:01:07.845432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc7110 is same with the state(6) to be set 00:26:28.190 13:01:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 754528 00:26:34.786 { 00:26:34.786 "results": [ 00:26:34.786 { 00:26:34.786 "job": "NVMe0n1", 00:26:34.786 "core_mask": "0x1", 00:26:34.786 "workload": "verify", 00:26:34.786 "status": "finished", 00:26:34.786 "verify_range": { 00:26:34.786 "start": 0, 00:26:34.786 "length": 16384 00:26:34.786 }, 00:26:34.786 "queue_depth": 128, 00:26:34.786 "io_size": 4096, 00:26:34.786 "runtime": 15.003983, 00:26:34.786 "iops": 11173.433081069206, 00:26:34.786 "mibps": 43.64622297292659, 
00:26:34.786 "io_failed": 5197, 00:26:34.786 "io_timeout": 0, 00:26:34.786 "avg_latency_us": 11083.530844215076, 00:26:34.786 "min_latency_us": 781.6533333333333, 00:26:34.786 "max_latency_us": 21517.653333333332 00:26:34.786 } 00:26:34.786 ], 00:26:34.786 "core_count": 1 00:26:34.786 } 00:26:34.786 13:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 754274 00:26:34.786 13:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 754274 ']' 00:26:34.786 13:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 754274 00:26:34.786 13:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:34.786 13:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.786 13:01:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 754274 00:26:34.786 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.786 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.786 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 754274' 00:26:34.786 killing process with pid 754274 00:26:34.786 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 754274 00:26:34.786 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 754274 00:26:34.786 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:34.786 [2024-11-25 13:00:57.338087] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:26:34.786 [2024-11-25 13:00:57.338147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid754274 ] 00:26:34.786 [2024-11-25 13:00:57.416743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.786 [2024-11-25 13:00:57.452625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.786 Running I/O for 15 seconds... 
00:26:34.786 11163.00 IOPS, 43.61 MiB/s [2024-11-25T12:01:14.689Z] [2024-11-25 13:00:59.996432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.786 [2024-11-25 13:00:59.996467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.786 [2024-11-25 13:00:59.996618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.786 [2024-11-25 13:00:59.996625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 [... the same nvme_qpair.c: 243 command print and nvme_qpair.c: 474 'ABORTED - SQ DELETION (00/08)' completion repeat for each remaining outstanding command: WRITE lba 96824-97000 and READ lba 96584-96632, in steps of 8 ...] 00:26:34.787 [2024-11-25 13:00:59.997148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97008
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:34.787 [2024-11-25 13:00:59.997320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.787 [2024-11-25 13:00:59.997476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.787 [2024-11-25 13:00:59.997483] 
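Every completion in the dump above carries the status pair (00/08): status code type 0x0 (generic command status) and status code 0x08, which the NVMe base specification names "Command Aborted due to SQ Deletion", the expected fate of in-flight commands when their submission queue is torn down under them. A minimal decoding sketch (the helper and its table are illustrative, not SPDK's code):

```python
# Decode the "(SCT/SC)" pair that appears in lines such as
# "ABORTED - SQ DELETION (00/08)": SCT is the status code type and SC
# the status code within that type, per the NVMe base specification.
SCT_GENERIC = 0x0

GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # Command Aborted due to SQ Deletion
}

def decode_status(sct: int, sc: int) -> str:
    """Human-readable name for an NVMe completion status pair."""
    if sct == SCT_GENERIC:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:02x} sc=0x{sc:02x}"

print(decode_status(0x00, 0x08))  # -> ABORTED - SQ DELETION
```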
00:26:34.788 [2024-11-25 13:00:59.997623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:34.788 [2024-11-25 13:00:59.997631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0
00:26:34.788 [2024-11-25 13:00:59.997639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.788 [2024-11-25 13:00:59.997649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the abort-queued / complete-manually / print-command / ABORTED sequence repeats for every request still queued on the qpair: the remaining forty-five WRITEs (lba:97240 through lba:97592) and fourteen READs (lba:96640 through lba:96744), all cid:0 with PRP1 0x0 PRP2 0x0 and status ABORTED - SQ DELETION (00/08) ...]
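For dumps of this size it is easier to let a script do the counting. A short sketch that tallies the aborted commands; the regular expression follows the record layout printed by nvme_io_qpair_print_command above, and the log file name nvmf.log is assumed:

```python
import re
from collections import Counter

# Tally aborted commands from a console log like the excerpt above.
# Matches records printed by nvme_io_qpair_print_command, e.g.
# "*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97232 len:8 PRP1 0x0 PRP2 0x0"
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+"
)

ops = Counter()
lbas = []
with open("nvmf.log") as log:  # hypothetical file name
    for line in log:
        for op, lba in CMD_RE.findall(line):
            ops[op] += 1
            lbas.append(int(lba))

print(ops)
if lbas:
    print(f"lba range: {min(lbas)}..{max(lbas)}")
```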
00:26:34.790 [2024-11-25 13:01:00.010791] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:34.790 [2024-11-25 13:01:00.010820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:34.790 [2024-11-25 13:01:00.010829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED pair repeats for admin-queue cid:1, cid:2 and cid:3 ...]
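The failover target 10.0.0.2:4421 is a second listener registered as an alternate path to the same controller. A sketch of how such a pair of paths can be attached through SPDK's JSON-RPC interface; /var/tmp/spdk.sock is SPDK's default RPC socket, the controller name Nvme0 is made up, and passing "multipath": "failover" is an assumption about how this particular test wires its paths:

```python
import json
import socket

# Attach the same NVMe-oF controller once per target address; the second
# bdev_nvme_attach_controller call registers 10.0.0.2:4421 as the failover
# path. The single recv() is a simplification: a real client should read
# until the JSON response is complete.
def rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        return json.loads(s.recv(65536))

for port in ("4420", "4421"):
    rpc("/var/tmp/spdk.sock", "bdev_nvme_attach_controller", {
        "name": "Nvme0",                 # hypothetical bdev name
        "trtype": "tcp", "adrfam": "ipv4",
        "traddr": "10.0.0.2", "trsvcid": port,
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "multipath": "failover",         # assumption for this test setup
    })
```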
00:26:34.790 [2024-11-25 13:01:00.010895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:34.791 [2024-11-25 13:01:00.010942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839d80 (9): Bad file descriptor
00:26:34.791 [2024-11-25 13:01:00.014482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:34.791 [2024-11-25 13:01:00.081544] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:34.791 11032.00 IOPS, 43.09 MiB/s [2024-11-25T12:01:14.694Z] 11065.33 IOPS, 43.22 MiB/s [2024-11-25T12:01:14.694Z] 11283.25 IOPS, 44.08 MiB/s [2024-11-25T12:01:14.694Z]
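The MiB/s column is the IOPS column restated: every command in this run is len:8 blocks, and assuming the 512-byte blocks that implies, each I/O is a 4 KiB transfer. A quick check that reproduces the samples above:

```python
# Each I/O in this run is len:8 blocks; assuming 512-byte blocks that is a
# 4 KiB transfer, which reproduces the log's MiB/s figures exactly.
BLOCK_SIZE = 512
BLOCKS_PER_IO = 8

def mib_per_s(iops: float) -> float:
    return iops * BLOCKS_PER_IO * BLOCK_SIZE / (1024 * 1024)

for iops in (11163.00, 11032.00, 11065.33, 11283.25):
    print(f"{iops:9.2f} IOPS -> {mib_per_s(iops):.2f} MiB/s")
# -> 43.61, 43.09, 43.22 and 44.08 MiB/s, matching the samples in the log
```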
00:26:34.791 [2024-11-25 13:01:03.465547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:34.791 [2024-11-25 13:01:03.465582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the command/completion pair repeats for WRITE lba:37472 through lba:37680 (8-block strides, varying cid), each completed ABORTED - SQ DELETION (00/08) ...]
00:26:34.791 [2024-11-25 13:01:03.466062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:23 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.791 [2024-11-25 13:01:03.466216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.791 [2024-11-25 13:01:03.466223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37768 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 
13:01:03.466407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.792 [2024-11-25 13:01:03.466704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.792 [2024-11-25 13:01:03.466721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.792 [2024-11-25 13:01:03.466820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.792 [2024-11-25 13:01:03.466837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.792 [2024-11-25 13:01:03.466846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.792 [2024-11-25 13:01:03.466853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.466865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.466872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.466882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.466889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.466899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.466908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.466917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.466925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.466935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.466942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.466951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.466959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.466968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.466975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.466984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.466991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.467008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.467025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.467041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.467058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.467074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 
[2024-11-25 13:01:03.467083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.467090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.793 [2024-11-25 13:01:03.467107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.793 [2024-11-25 13:01:03.467391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.793 [2024-11-25 13:01:03.467420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38176 len:8 PRP1 0x0 PRP2 0x0 00:26:34.793 [2024-11-25 13:01:03.467428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:34.793 [2024-11-25 13:01:03.467438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.793 [2024-11-25 13:01:03.467444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.793 [2024-11-25 13:01:03.467450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38184 len:8 PRP1 0x0 PRP2 0x0 00:26:34.793 [2024-11-25 13:01:03.467457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.793 [2024-11-25 13:01:03.467471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.793 [2024-11-25 13:01:03.467477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38192 len:8 PRP1 0x0 PRP2 0x0 00:26:34.793 [2024-11-25 13:01:03.467484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.793 [2024-11-25 13:01:03.467497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.793 [2024-11-25 13:01:03.467503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38200 len:8 PRP1 0x0 PRP2 0x0 00:26:34.793 [2024-11-25 13:01:03.467510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.793 [2024-11-25 13:01:03.467518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38208 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38216 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38224 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467599] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38232 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38240 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38248 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38256 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38264 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38272 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38280 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38288 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38296 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38304 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38312 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 13:01:03.467908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38320 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.794 [2024-11-25 
13:01:03.467934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.794 [2024-11-25 13:01:03.467940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38328 len:8 PRP1 0x0 PRP2 0x0 00:26:34.794 [2024-11-25 13:01:03.467947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.467984] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:34.794 [2024-11-25 13:01:03.468004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.794 [2024-11-25 13:01:03.468015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.468023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.794 [2024-11-25 13:01:03.468030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.468039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.794 [2024-11-25 13:01:03.468046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.468054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.794 [2024-11-25 13:01:03.468061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.794 [2024-11-25 13:01:03.468069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:34.794 [2024-11-25 13:01:03.471607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:34.794 [2024-11-25 13:01:03.471633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839d80 (9): Bad file descriptor 00:26:34.794 [2024-11-25 13:01:03.502438] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:26:34.794 11252.20 IOPS, 43.95 MiB/s [2024-11-25T12:01:14.697Z] 11253.00 IOPS, 43.96 MiB/s [2024-11-25T12:01:14.697Z] 11215.86 IOPS, 43.81 MiB/s [2024-11-25T12:01:14.697Z] 11229.12 IOPS, 43.86 MiB/s [2024-11-25T12:01:14.697Z]
00:26:34.794 [2024-11-25 13:01:07.848518 .. 13:01:07.849568] nvme_qpair.c: [... condensed: in-flight I/O commands (READ sqid:1 lba:46440-46520 and WRITE sqid:1 lba:46528-46912, len:8 each) printed by nvme_io_qpair_print_command, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:26:34.796 [2024-11-25 13:01:07.849577] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.796 [2024-11-25 13:01:07.849584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.796 [2024-11-25 13:01:07.849601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.796 [2024-11-25 13:01:07.849617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.796 [2024-11-25 13:01:07.849633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.796 [2024-11-25 13:01:07.849649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46960 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.796 [2024-11-25 13:01:07.849703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46968 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.796 [2024-11-25 13:01:07.849730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46976 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.796 [2024-11-25 13:01:07.849756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46984 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.796 [2024-11-25 13:01:07.849785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46992 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.796 [2024-11-25 13:01:07.849811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47000 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.796 [2024-11-25 13:01:07.849838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47008 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.796 [2024-11-25 13:01:07.849868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47016 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.796 [2024-11-25 13:01:07.849888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.796 [2024-11-25 13:01:07.849894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.796 [2024-11-25 13:01:07.849900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47024 len:8 PRP1 0x0 PRP2 0x0 00:26:34.796 [2024-11-25 13:01:07.849908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.849915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.849921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 
13:01:07.849927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47032 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.849934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.849941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.849947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.849953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47040 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.849960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.849968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.849975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.849982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47048 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.849989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.849996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47056 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47064 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47072 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47080 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47088 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47096 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47104 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47112 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47120 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:47128 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47136 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47144 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47152 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47160 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47168 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47176 len:8 PRP1 0x0 PRP2 0x0 
00:26:34.797 [2024-11-25 13:01:07.850415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47184 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47192 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47200 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.797 [2024-11-25 13:01:07.850508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.797 [2024-11-25 13:01:07.850514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47208 len:8 PRP1 0x0 PRP2 0x0 00:26:34.797 [2024-11-25 13:01:07.850521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.797 [2024-11-25 13:01:07.850529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47216 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47224 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47232 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47240 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47248 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47256 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47264 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47272 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47280 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47288 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47296 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47304 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47312 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.850871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.850879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.850885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.850891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47320 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:34.798 [2024-11-25 13:01:07.861665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47328 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47336 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47344 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47352 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47360 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47368 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861836] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47376 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47384 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.798 [2024-11-25 13:01:07.861910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47392 len:8 PRP1 0x0 PRP2 0x0 00:26:34.798 [2024-11-25 13:01:07.861918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.798 [2024-11-25 13:01:07.861925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.798 [2024-11-25 13:01:07.861931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.799 [2024-11-25 13:01:07.861937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47400 len:8 PRP1 0x0 PRP2 0x0 00:26:34.799 [2024-11-25 13:01:07.861944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.861951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.799 [2024-11-25 13:01:07.861957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.799 [2024-11-25 13:01:07.861963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47408 len:8 PRP1 0x0 PRP2 0x0 00:26:34.799 [2024-11-25 13:01:07.861971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.861979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.799 [2024-11-25 13:01:07.861984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.799 [2024-11-25 13:01:07.861992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47416 len:8 PRP1 0x0 PRP2 0x0 00:26:34.799 [2024-11-25 13:01:07.861999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.862007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:26:34.799 [2024-11-25 13:01:07.862013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.799 [2024-11-25 13:01:07.862019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47424 len:8 PRP1 0x0 PRP2 0x0 00:26:34.799 [2024-11-25 13:01:07.862026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.862033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.799 [2024-11-25 13:01:07.862038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.799 [2024-11-25 13:01:07.862044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47432 len:8 PRP1 0x0 PRP2 0x0 00:26:34.799 [2024-11-25 13:01:07.862052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.862059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.799 [2024-11-25 13:01:07.862065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.799 [2024-11-25 13:01:07.862071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47440 len:8 PRP1 0x0 PRP2 0x0 00:26:34.799 [2024-11-25 13:01:07.862079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.862087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.799 [2024-11-25 13:01:07.862092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.799 [2024-11-25 13:01:07.862098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47448 len:8 PRP1 0x0 PRP2 0x0 00:26:34.799 [2024-11-25 13:01:07.862105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.862113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.799 [2024-11-25 13:01:07.862118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.799 [2024-11-25 13:01:07.862124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47456 len:8 PRP1 0x0 PRP2 0x0 00:26:34.799 [2024-11-25 13:01:07.862131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.862176] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:34.799 [2024-11-25 13:01:07.862206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.799 [2024-11-25 13:01:07.862215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.799 [2024-11-25 13:01:07.862224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.799 [2024-11-25 
00:26:34.799 [2024-11-25 13:01:07.862232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.799 [2024-11-25 13:01:07.862240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:34.799 [2024-11-25 13:01:07.862247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.799 [2024-11-25 13:01:07.862260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:34.799 [2024-11-25 13:01:07.862268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:34.799 [2024-11-25 13:01:07.862275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:26:34.799 [2024-11-25 13:01:07.862305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1839d80 (9): Bad file descriptor
00:26:34.799 [2024-11-25 13:01:07.865854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:26:34.799 [2024-11-25 13:01:07.891807] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:26:34.799 11189.67 IOPS, 43.71 MiB/s
[2024-11-25T12:01:14.702Z] 11193.40 IOPS, 43.72 MiB/s
[2024-11-25T12:01:14.702Z] 11205.73 IOPS, 43.77 MiB/s
[2024-11-25T12:01:14.702Z] 11193.25 IOPS, 43.72 MiB/s
[2024-11-25T12:01:14.702Z] 11182.08 IOPS, 43.68 MiB/s
[2024-11-25T12:01:14.702Z] 11179.50 IOPS, 43.67 MiB/s
00:26:34.799 Latency(us)
[2024-11-25T12:01:14.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:34.799 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:34.799 Verification LBA range: start 0x0 length 0x4000
00:26:34.799 NVMe0n1 : 15.00 11173.43 43.65 346.37 0.00 11083.53 781.65 21517.65
[2024-11-25T12:01:14.702Z] ===================================================================================================================
[2024-11-25T12:01:14.702Z] Total : 11173.43 43.65 346.37 0.00 11083.53 781.65 21517.65
00:26:34.799 Received shutdown signal, test time was about 15.000000 seconds
00:26:34.799
00:26:34.799 Latency(us)
[2024-11-25T12:01:14.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-25T12:01:14.702Z] ===================================================================================================================
[2024-11-25T12:01:14.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=757524
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 757524 /var/tmp/bdevperf.sock
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
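The @65/@67 lines above are the pass gate for the 15-second run: try.txt must contain exactly three 'Resetting controller successful' messages, one per forced failover. A minimal standalone sketch of the same check in the same shell style, assuming the try.txt path used by this job and hard-coding the expected count of 3 from the trace:

  #!/usr/bin/env bash
  # Sketch of the failover.sh pass gate: count successful controller resets
  # recorded by bdevperf and fail unless exactly the expected number appear.
  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  expected=3
  count=$(grep -c 'Resetting controller successful' "$log")
  if (( count != expected )); then
      echo "expected $expected successful resets, got $count" >&2
      exit 1
  fi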
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 757524 ']'
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:34.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:34.799 13:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:35.371 13:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:35.371 13:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:35.371 13:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:35.371 [2024-11-25 13:01:15.174066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:35.630 13:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:35.630 [2024-11-25 13:01:15.362519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:35.630 13:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:35.891 NVMe0n1
00:26:35.891 13:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:36.151
00:26:36.151 13:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:36.411
00:26:36.411 13:01:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:36.411 13:01:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:36.671 13:01:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:36.933 13:01:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
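Condensed, the trace above wires the freshly launched bdevperf to the same subsystem over three TCP portals and then tears down the active path to force a failover. A sketch of the equivalent RPC sequence, with the absolute workspace path to rpc.py shortened; every subcommand and flag appears verbatim in the trace:

  rpc=scripts/rpc.py                 # the trace uses the absolute workspace path
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # expose two additional portals on the target side
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

  # register all three paths under one bdev; -x failover selects the
  # failover multipath policy
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s $port -f ipv4 -n $nqn -x failover
  done

  # drop the active path so I/O fails over to the next one
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n $nqn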
00:26:40.232 13:01:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:40.232 13:01:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:26:40.232 13:01:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=758539
00:26:40.232 13:01:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:40.232 13:01:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 758539
00:26:41.173 {
00:26:41.173   "results": [
00:26:41.173     {
00:26:41.173       "job": "NVMe0n1",
00:26:41.173       "core_mask": "0x1",
00:26:41.173       "workload": "verify",
00:26:41.173       "status": "finished",
00:26:41.173       "verify_range": {
00:26:41.173         "start": 0,
00:26:41.173         "length": 16384
00:26:41.173       },
00:26:41.173       "queue_depth": 128,
00:26:41.173       "io_size": 4096,
00:26:41.173       "runtime": 1.005855,
00:26:41.173       "iops": 11120.887205412311,
00:26:41.173       "mibps": 43.44096564614184,
00:26:41.173       "io_failed": 0,
00:26:41.173       "io_timeout": 0,
00:26:41.173       "avg_latency_us": 11441.703882233744,
00:26:41.173       "min_latency_us": 1870.5066666666667,
00:26:41.173       "max_latency_us": 11086.506666666666
00:26:41.173     }
00:26:41.173   ],
00:26:41.173   "core_count": 1
00:26:41.173 }
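The JSON block above is what bdevperf.py prints once perform_tests finishes, and its numbers are self-consistent: 11120.89 IOPS at 4096 B per I/O is about 43.44 MiB/s, matching the mibps field over the 1.006 s runtime. A small sketch for pulling the headline figures out of a saved copy, assuming the output were captured to a hypothetical results.json:

  # results.json is a hypothetical capture of the perform_tests output above;
  # field names match the JSON as printed.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json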
00:26:41.173 13:01:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:41.173 [2024-11-25 13:01:14.239053] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:26:41.173 [2024-11-25 13:01:14.239127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757524 ]
00:26:41.173 [2024-11-25 13:01:14.317743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:41.173 [2024-11-25 13:01:14.353235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:41.173 [2024-11-25 13:01:16.576517] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:41.173 [2024-11-25 13:01:16.576561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:41.173 [2024-11-25 13:01:16.576573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.173 [2024-11-25 13:01:16.576582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:41.173 [2024-11-25 13:01:16.576590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.173 [2024-11-25 13:01:16.576598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:41.173 [2024-11-25 13:01:16.576605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.173 [2024-11-25 13:01:16.576613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:41.173 [2024-11-25 13:01:16.576620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.173 [2024-11-25 13:01:16.576628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:26:41.173 [2024-11-25 13:01:16.576654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:26:41.173 [2024-11-25 13:01:16.576668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd32d80 (9): Bad file descriptor
00:26:41.173 [2024-11-25 13:01:16.587441] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:41.173 Running I/O for 1 seconds...
00:26:41.173 11058.00 IOPS, 43.20 MiB/s
00:26:41.173 Latency(us)
[2024-11-25T12:01:21.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:41.173 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:41.173 Verification LBA range: start 0x0 length 0x4000
00:26:41.173 NVMe0n1 : 1.01 11120.89 43.44 0.00 0.00 11441.70 1870.51 11086.51
[2024-11-25T12:01:21.076Z] ===================================================================================================================
[2024-11-25T12:01:21.076Z] Total : 11120.89 43.44 0.00 0.00 11441.70 1870.51 11086.51
00:26:41.173 13:01:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:41.433 13:01:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:41.433 13:01:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:41.433 13:01:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:41.694 13:01:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:41.694 13:01:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:41.954 13:01:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 757524
00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 757524 ']'
00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 757524
00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
Linux ']' 00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 757524 00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 757524' 00:26:45.250 killing process with pid 757524 00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 757524 00:26:45.250 13:01:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 757524 00:26:45.250 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:45.250 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:45.512 rmmod nvme_tcp 00:26:45.512 rmmod nvme_fabrics 00:26:45.512 rmmod nvme_keyring 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 753812 ']' 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 753812 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 753812 ']' 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 753812 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 753812 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 753812' 00:26:45.512 killing process with pid 753812 00:26:45.512 
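The kill/wait exchange traced here is the autotest killprocess helper: before signalling, it confirms the platform with uname, resolves the command name behind the PID with ps (here reactor_0) and checks it against sudo, only then echoes and kills. A minimal sketch of that look-before-kill pattern follows; the function body is illustrative, not the verbatim autotest_common.sh implementation, and it assumes the target is a direct child of the calling shell:

    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        # resolve the command name first so a recycled PID belonging to an
        # unrelated process is never signalled blindly
        local comm
        comm=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
        echo "killing process with pid $pid ($comm)"
        kill "$pid"
        # wait reaps the child and propagates its exit status; it only works
        # for children of this shell, which is why the trace calls it
        # immediately after kill in the same function
        wait "$pid" 2>/dev/null || true
    }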
13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 753812 00:26:45.512 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 753812 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.776 13:01:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.689 13:01:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:47.689 00:26:47.689 real 0m40.839s 00:26:47.689 user 2m3.685s 00:26:47.689 sys 0m8.912s 00:26:47.689 13:01:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:47.689 13:01:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:47.689 ************************************ 00:26:47.689 END TEST nvmf_failover 00:26:47.689 ************************************ 00:26:47.689 13:01:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:47.689 13:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:47.689 13:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:47.689 13:01:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.951 ************************************ 00:26:47.951 START TEST nvmf_host_discovery 00:26:47.951 ************************************ 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:47.951 * Looking for test storage... 
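In the nvmftestfini teardown just above, the iptr step restores the firewall by replaying the saved ruleset minus every rule carrying an SPDK_NVMF comment tag; the matching ipts helper (visible later in this trace when the discovery test brings up its namespace) installs rules with that tag. A minimal sketch of the tag-and-filter pair, assuming root privileges and an iptables build with the comment match module; the function names are stand-ins for the common.sh helpers:

    # add a rule tagged so teardown can find it again (mirrors ipts)
    ipts_sketch() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    # drop exactly the tagged rules by filtering the saved ruleset (mirrors iptr)
    iptr_sketch() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

Usage as seen further down in this trace: ipts_sketch -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT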
00:26:47.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:47.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.951 --rc genhtml_branch_coverage=1 00:26:47.951 --rc genhtml_function_coverage=1 00:26:47.951 --rc genhtml_legend=1 00:26:47.951 --rc geninfo_all_blocks=1 00:26:47.951 --rc geninfo_unexecuted_blocks=1 00:26:47.951 00:26:47.951 ' 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:47.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.951 --rc genhtml_branch_coverage=1 00:26:47.951 --rc genhtml_function_coverage=1 00:26:47.951 --rc genhtml_legend=1 00:26:47.951 --rc geninfo_all_blocks=1 00:26:47.951 --rc geninfo_unexecuted_blocks=1 00:26:47.951 00:26:47.951 ' 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:47.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.951 --rc genhtml_branch_coverage=1 00:26:47.951 --rc genhtml_function_coverage=1 00:26:47.951 --rc genhtml_legend=1 00:26:47.951 --rc geninfo_all_blocks=1 00:26:47.951 --rc geninfo_unexecuted_blocks=1 00:26:47.951 00:26:47.951 ' 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:47.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.951 --rc genhtml_branch_coverage=1 00:26:47.951 --rc genhtml_function_coverage=1 00:26:47.951 --rc genhtml_legend=1 00:26:47.951 --rc geninfo_all_blocks=1 00:26:47.951 --rc geninfo_unexecuted_blocks=1 00:26:47.951 00:26:47.951 ' 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:47.951 13:01:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.951 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:47.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.952 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.213 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:48.213 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:48.213 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.213 13:01:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:56.354 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:56.354 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.354 13:01:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:56.354 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:56.355 Found net devices under 0000:31:00.0: cvl_0_0 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:56.355 Found net devices under 0000:31:00.1: cvl_0_1 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.355 
13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.355 13:01:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:56.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:26:56.355 00:26:56.355 --- 10.0.0.2 ping statistics --- 00:26:56.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.355 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:26:56.355 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:26:56.616 00:26:56.616 --- 10.0.0.1 ping statistics --- 00:26:56.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.616 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=764415 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 764415 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 764415 ']' 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.616 13:01:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.616 [2024-11-25 13:01:36.369524] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:26:56.616 [2024-11-25 13:01:36.369576] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.616 [2024-11-25 13:01:36.470740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.616 [2024-11-25 13:01:36.507478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.616 [2024-11-25 13:01:36.507513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.616 [2024-11-25 13:01:36.507521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.616 [2024-11-25 13:01:36.507528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.616 [2024-11-25 13:01:36.507534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.616 [2024-11-25 13:01:36.508156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.558 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.558 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:57.558 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:57.558 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:57.558 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.559 [2024-11-25 13:01:37.218187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.559 [2024-11-25 13:01:37.230468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.559 null0 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.559 null1 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=764600 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 764600 /tmp/host.sock 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 764600 ']' 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:57.559 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.559 13:01:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.559 [2024-11-25 13:01:37.329430] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
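Below, the host-side target listening on /tmp/host.sock is driven through two helpers the trace expands over and over: get_subsystem_names (bdev_nvme_get_controllers) and get_bdev_list (bdev_get_bdevs), each reduced to a sorted name list with jq, while waitforcondition polls them until discovery has attached nvme0 and surfaced nvme0n1. A minimal sketch of that polling pattern, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and the retry loop reduced to the core shape visible in the trace:

    get_subsystem_names() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # retry a condition string up to 10 times, one second apart, before
    # failing the test (the max=10 / (( max-- )) / eval shape from the trace)
    waitforcondition_sketch() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

Usage as in the trace: waitforcondition_sketch '[[ "$(get_subsystem_names)" == "nvme0" ]]'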
00:26:57.559 [2024-11-25 13:01:37.329495] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid764600 ] 00:26:57.559 [2024-11-25 13:01:37.412616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.559 [2024-11-25 13:01:37.454330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:58.501 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:58.502 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.762 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:58.762 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:58.762 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.762 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:58.762 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.762 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.763 [2024-11-25 13:01:38.465636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:58.763 13:01:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.763 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.023 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:59.023 13:01:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:59.283 [2024-11-25 13:01:39.146096] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:59.283 [2024-11-25 13:01:39.146124] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:59.283 [2024-11-25 13:01:39.146138] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:59.543 
[2024-11-25 13:01:39.234403] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:59.804 [2024-11-25 13:01:39.457677] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:59.804 [2024-11-25 13:01:39.458652] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1995740:1 started. 00:26:59.804 [2024-11-25 13:01:39.460269] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:59.804 [2024-11-25 13:01:39.460286] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:59.804 [2024-11-25 13:01:39.465974] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1995740 was disconnected and freed. delete nvme_qpair. 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:59.804 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.066 13:01:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:00.066 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:00.067 [2024-11-25 13:01:39.902312] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1995ae0:1 started. 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:00.067 [2024-11-25 13:01:39.907076] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1995ae0 was disconnected and freed. delete nvme_qpair. 
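The trace above keeps re-evaluating conditions through a waitforcondition helper (autotest_common.sh@918-924): it stashes the condition string, allows up to ten attempts, and sleeps one second between eval rounds. A minimal sketch of that loop, reconstructed from the echoed xtrace lines; the exact body in autotest_common.sh may differ, and the failure path is assumed since this run never exhausts the retries:

    # Reconstructed from the xtrace lines at autotest_common.sh@918-924 above;
    # a sketch, not the verbatim SPDK helper.
    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
        local max=10       # at most ten one-second attempts
        while (( max-- )); do
            if eval "$cond"; then
                return 0   # condition met, as at @922 in the trace
            fi
            sleep 1        # matches the 'sleep 1' echoed at @924
        done
        return 1           # assumed failure path; not exercised in this trace
    }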
00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.067 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.328 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.328 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:00.328 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:00.328 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:00.328 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:00.328 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:00.328 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.328 13:01:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.328 [2024-11-25 13:01:40.005973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:00.329 [2024-11-25 13:01:40.006532] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:00.329 [2024-11-25 13:01:40.006555] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:00.329 [2024-11-25 13:01:40.094820] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:00.329 13:01:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:00.589 [2024-11-25 13:01:40.408701] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:00.589 [2024-11-25 13:01:40.408748] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:00.589 [2024-11-25 13:01:40.408758] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:00.589 [2024-11-25 13:01:40.408763] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.535 [2024-11-25 13:01:41.277797] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:01.535 [2024-11-25 13:01:41.277819] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:01.535 [2024-11-25 13:01:41.282152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.535 [2024-11-25 13:01:41.282173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.535 [2024-11-25 13:01:41.282183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.535 [2024-11-25 13:01:41.282190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.535 [2024-11-25 13:01:41.282199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.535 [2024-11-25 13:01:41.282207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.535 [2024-11-25 13:01:41.282221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.535 [2024-11-25 13:01:41.282228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.535 [2024-11-25 13:01:41.282236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.535 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:01.535 [2024-11-25 13:01:41.292164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.535 [2024-11-25 13:01:41.302202] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.535 [2024-11-25 13:01:41.302216] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.535 [2024-11-25 13:01:41.302221] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.535 [2024-11-25 13:01:41.302226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.535 [2024-11-25 13:01:41.302245] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:01.535 [2024-11-25 13:01:41.302492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.535 [2024-11-25 13:01:41.302509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.535 [2024-11-25 13:01:41.302518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.535 [2024-11-25 13:01:41.302530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.535 [2024-11-25 13:01:41.302542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.535 [2024-11-25 13:01:41.302549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.535 [2024-11-25 13:01:41.302558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.535 [2024-11-25 13:01:41.302565] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.535 [2024-11-25 13:01:41.302571] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.535 [2024-11-25 13:01:41.302576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
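At discovery.sh@118 the test added a second listener on port 4421, waited for the discovery service to attach the new path, and at @127 removed the 4420 listener; the errno-111 bursts in this stretch are the host driver probing the now-closed port. The RPC sequence, lifted from the commands echoed in the trace (rpc_cmd and the waitforcondition helper as used throughout; 4420/4421 stand in for $NVMF_PORT/$NVMF_SECOND_PORT):

    # Listener failover as echoed at discovery.sh@118/@122/@127/@131:
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'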
00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.536 [2024-11-25 13:01:41.312276] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.536 [2024-11-25 13:01:41.312288] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.536 [2024-11-25 13:01:41.312293] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.536 [2024-11-25 13:01:41.312297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.536 [2024-11-25 13:01:41.312312] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:01.536 [2024-11-25 13:01:41.312615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.536 [2024-11-25 13:01:41.312628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.536 [2024-11-25 13:01:41.312636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.536 [2024-11-25 13:01:41.312647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.536 [2024-11-25 13:01:41.312658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.536 [2024-11-25 13:01:41.312665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.536 [2024-11-25 13:01:41.312672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.536 [2024-11-25 13:01:41.312678] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.536 [2024-11-25 13:01:41.312683] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.536 [2024-11-25 13:01:41.312688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.536 [2024-11-25 13:01:41.322344] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.536 [2024-11-25 13:01:41.322357] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.536 [2024-11-25 13:01:41.322362] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.536 [2024-11-25 13:01:41.322367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.536 [2024-11-25 13:01:41.322383] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:01.536 [2024-11-25 13:01:41.322683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.536 [2024-11-25 13:01:41.322697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.536 [2024-11-25 13:01:41.322705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.536 [2024-11-25 13:01:41.322716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.536 [2024-11-25 13:01:41.322727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.536 [2024-11-25 13:01:41.322734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.536 [2024-11-25 13:01:41.322742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.536 [2024-11-25 13:01:41.322748] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.536 [2024-11-25 13:01:41.322757] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.536 [2024-11-25 13:01:41.322762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.536 [2024-11-25 13:01:41.332415] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.536 [2024-11-25 13:01:41.332428] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.536 [2024-11-25 13:01:41.332433] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.536 [2024-11-25 13:01:41.332438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.536 [2024-11-25 13:01:41.332453] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:01.536 [2024-11-25 13:01:41.332790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.536 [2024-11-25 13:01:41.332803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.536 [2024-11-25 13:01:41.332810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.536 [2024-11-25 13:01:41.332821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.536 [2024-11-25 13:01:41.332832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.536 [2024-11-25 13:01:41.332839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.536 [2024-11-25 13:01:41.332846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.536 [2024-11-25 13:01:41.332852] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:01.536 [2024-11-25 13:01:41.332857] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.536 [2024-11-25 13:01:41.332865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.536 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:01.536 [2024-11-25 13:01:41.342484] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.536 [2024-11-25 13:01:41.342498] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.536 [2024-11-25 13:01:41.342503] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.536 [2024-11-25 13:01:41.342508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.536 [2024-11-25 13:01:41.342523] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
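The string comparisons above ("nvme0", "nvme0n1 nvme0n2", "4420 4421") come from three small accessors echoed at discovery.sh@55, @59 and @63; each queries the host-side RPC socket and flattens the jq output into one sorted, space-separated line. A sketch consistent with the echoed pipelines:

    # Accessors reconstructed from discovery.sh@55/@59/@63 in the trace
    # (host RPC socket /tmp/host.sock as used throughout this test):
    get_bdev_list() {           # -> "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_names() {     # -> "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {     # -> "4420 4421" for $1=nvme0
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The bare xargs at the end of each pipeline just joins jq's one-value-per-line output with spaces, which is what makes the single-string [[ == ]] comparisons in waitforcondition work.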
00:27:01.536 [2024-11-25 13:01:41.342800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.536 [2024-11-25 13:01:41.342813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.536 [2024-11-25 13:01:41.342821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.536 [2024-11-25 13:01:41.342833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.536 [2024-11-25 13:01:41.342843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.536 [2024-11-25 13:01:41.342850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.536 [2024-11-25 13:01:41.342858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.536 [2024-11-25 13:01:41.342869] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.536 [2024-11-25 13:01:41.342874] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.536 [2024-11-25 13:01:41.342879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.536 [2024-11-25 13:01:41.352554] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.536 [2024-11-25 13:01:41.352569] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.536 [2024-11-25 13:01:41.352574] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.536 [2024-11-25 13:01:41.352578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.536 [2024-11-25 13:01:41.352594] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:01.536 [2024-11-25 13:01:41.352784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.536 [2024-11-25 13:01:41.352796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.536 [2024-11-25 13:01:41.352804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.536 [2024-11-25 13:01:41.352815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.536 [2024-11-25 13:01:41.352826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.536 [2024-11-25 13:01:41.352834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.536 [2024-11-25 13:01:41.352841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.536 [2024-11-25 13:01:41.352847] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:01.536 [2024-11-25 13:01:41.352853] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.536 [2024-11-25 13:01:41.352857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.536 [2024-11-25 13:01:41.362624] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.537 [2024-11-25 13:01:41.362643] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.537 [2024-11-25 13:01:41.362648] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.362653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.537 [2024-11-25 13:01:41.362667] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.363098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.537 [2024-11-25 13:01:41.363137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.537 [2024-11-25 13:01:41.363149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.537 [2024-11-25 13:01:41.363167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.537 [2024-11-25 13:01:41.363179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.537 [2024-11-25 13:01:41.363186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.537 [2024-11-25 13:01:41.363194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.537 [2024-11-25 13:01:41.363202] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.537 [2024-11-25 13:01:41.363207] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.537 [2024-11-25 13:01:41.363212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.537 [2024-11-25 13:01:41.372701] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.537 [2024-11-25 13:01:41.372717] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.537 [2024-11-25 13:01:41.372722] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.372727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.537 [2024-11-25 13:01:41.372745] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:01.537 [2024-11-25 13:01:41.373014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.537 [2024-11-25 13:01:41.373030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.537 [2024-11-25 13:01:41.373038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.537 [2024-11-25 13:01:41.373050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.537 [2024-11-25 13:01:41.373061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.537 [2024-11-25 13:01:41.373068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.537 [2024-11-25 13:01:41.373076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.537 [2024-11-25 13:01:41.373082] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.537 [2024-11-25 13:01:41.373087] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.537 [2024-11-25 13:01:41.373092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.537 [2024-11-25 13:01:41.382777] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.537 [2024-11-25 13:01:41.382789] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.537 [2024-11-25 13:01:41.382794] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.382798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.537 [2024-11-25 13:01:41.382814] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.383093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.537 [2024-11-25 13:01:41.383107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.537 [2024-11-25 13:01:41.383115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.537 [2024-11-25 13:01:41.383126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.537 [2024-11-25 13:01:41.383137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.537 [2024-11-25 13:01:41.383143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.537 [2024-11-25 13:01:41.383150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
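Every reconnect attempt in this stretch dies in posix_sock_create with errno = 111, i.e. ECONNREFUSED: the target is still up, but nothing listens on 10.0.0.2:4420 after the remove_listener call, so each bdev_nvme_reconnect_ctrlr cycle fails immediately and reschedules. A hypothetical manual probe (not part of discovery.sh) that would see the same refusal from the host namespace:

    # Hypothetical check using bash's /dev/tcp redirection; not from the test.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused (errno 111 / ECONNREFUSED), as the reconnect poller sees"
    fi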
00:27:01.537 [2024-11-25 13:01:41.383157] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.537 [2024-11-25 13:01:41.383161] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.537 [2024-11-25 13:01:41.383166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:01.537 [2024-11-25 13:01:41.392845] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.537 [2024-11-25 13:01:41.392856] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.537 [2024-11-25 13:01:41.392860] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.392869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.537 [2024-11-25 13:01:41.392884] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.393164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.537 [2024-11-25 13:01:41.393179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.537 [2024-11-25 13:01:41.393187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.537 [2024-11-25 13:01:41.393198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.537 [2024-11-25 13:01:41.393208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.537 [2024-11-25 13:01:41.393215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.537 [2024-11-25 13:01:41.393222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:01.537 [2024-11-25 13:01:41.393228] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.537 [2024-11-25 13:01:41.393233] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.537 [2024-11-25 13:01:41.393237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.537 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:01.537 [2024-11-25 13:01:41.402915] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:01.537 [2024-11-25 13:01:41.402929] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:01.537 [2024-11-25 13:01:41.402933] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.402938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.537 [2024-11-25 13:01:41.402953] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:01.537 [2024-11-25 13:01:41.403272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.537 [2024-11-25 13:01:41.403285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1965d90 with addr=10.0.0.2, port=4420 00:27:01.537 [2024-11-25 13:01:41.403292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1965d90 is same with the state(6) to be set 00:27:01.537 [2024-11-25 13:01:41.403303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1965d90 (9): Bad file descriptor 00:27:01.537 [2024-11-25 13:01:41.403314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.537 [2024-11-25 13:01:41.403320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.537 [2024-11-25 13:01:41.403328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.537 [2024-11-25 13:01:41.403334] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:01.537 [2024-11-25 13:01:41.403338] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:01.537 [2024-11-25 13:01:41.403343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
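The is_notification_count_eq checks that follow rely on get_notification_count (discovery.sh@74-75): it asks the host socket for notifications newer than $notify_id, counts them with jq, and advances the cursor, which is why notify_id steps 0 -> 1 -> 2 through this run while notification_count is recomputed each time. A sketch under those assumptions; the cursor arithmetic is inferred from that progression, not read from the script:

    # Reconstructed from discovery.sh@74-75; notification_count and notify_id
    # are globals that persist between calls, as the trace shows.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }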
00:27:01.537 [2024-11-25 13:01:41.406462] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:01.537 [2024-11-25 13:01:41.406482] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:01.538 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.799 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:27:01.799 13:01:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:02.740 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.001 13:01:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.944 [2024-11-25 13:01:43.778848] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:03.944 [2024-11-25 13:01:43.778868] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:03.944 [2024-11-25 13:01:43.778881] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:04.206 [2024-11-25 13:01:43.906303] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:04.206 [2024-11-25 13:01:44.011149] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:27:04.206 [2024-11-25 13:01:44.011910] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1994d10:1 started. 
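The notification checks bracketing the stop/start of discovery use a running cursor: get_notification_count asks the host app only for events newer than notify_id, so the first check above sees 0 new events while the second sees the 2 events recorded after nvme0n1/nvme0n2 went away, advancing notify_id from 2 to 4. Reconstructed from the variables in the trace as a sketch (the real helper lives in host/discovery.sh):

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))   # cursor: 2 -> 4 in this run
    }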
00:27:04.206 [2024-11-25 13:01:44.013758] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:04.206 [2024-11-25 13:01:44.013786] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.206 request: 00:27:04.206 { 00:27:04.206 "name": "nvme", 00:27:04.206 "trtype": "tcp", 00:27:04.206 "traddr": "10.0.0.2", 00:27:04.206 "adrfam": "ipv4", 00:27:04.206 "trsvcid": "8009", 00:27:04.206 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:04.206 "wait_for_attach": true, 00:27:04.206 "method": "bdev_nvme_start_discovery", 00:27:04.206 "req_id": 1 00:27:04.206 } 00:27:04.206 Got JSON-RPC error response 00:27:04.206 response: 00:27:04.206 { 00:27:04.206 "code": -17, 00:27:04.206 "message": "File exists" 00:27:04.206 } 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.206 13:01:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.206 [2024-11-25 13:01:44.057645] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1994d10 was disconnected and freed. delete nvme_qpair. 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.206 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.468 request: 00:27:04.468 { 00:27:04.468 "name": "nvme_second", 00:27:04.468 "trtype": "tcp", 00:27:04.468 "traddr": "10.0.0.2", 00:27:04.468 "adrfam": "ipv4", 00:27:04.468 "trsvcid": "8009", 00:27:04.468 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:04.468 "wait_for_attach": true, 00:27:04.468 "method": 
"bdev_nvme_start_discovery", 00:27:04.468 "req_id": 1 00:27:04.468 } 00:27:04.468 Got JSON-RPC error response 00:27:04.468 response: 00:27:04.468 { 00:27:04.468 "code": -17, 00:27:04.468 "message": "File exists" 00:27:04.468 } 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:04.468 13:01:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.468 13:01:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.408 [2024-11-25 13:01:45.273257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.408 [2024-11-25 13:01:45.273286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1963430 with addr=10.0.0.2, port=8010 00:27:05.408 [2024-11-25 13:01:45.273299] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:05.408 [2024-11-25 13:01:45.273306] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:05.408 [2024-11-25 13:01:45.273313] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:06.792 [2024-11-25 13:01:46.275571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.792 [2024-11-25 13:01:46.275594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197dbe0 with addr=10.0.0.2, port=8010 00:27:06.792 [2024-11-25 13:01:46.275606] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:06.792 [2024-11-25 13:01:46.275613] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:06.792 [2024-11-25 13:01:46.275619] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:07.390 [2024-11-25 13:01:47.277595] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:07.390 request: 00:27:07.391 { 00:27:07.391 "name": "nvme_second", 00:27:07.391 "trtype": "tcp", 00:27:07.391 "traddr": "10.0.0.2", 00:27:07.720 "adrfam": "ipv4", 00:27:07.720 "trsvcid": "8010", 00:27:07.720 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:07.720 "wait_for_attach": false, 00:27:07.720 "attach_timeout_ms": 3000, 00:27:07.720 "method": "bdev_nvme_start_discovery", 00:27:07.720 "req_id": 1 00:27:07.720 } 00:27:07.720 Got JSON-RPC error response 00:27:07.720 response: 00:27:07.720 { 00:27:07.720 "code": -110, 00:27:07.720 "message": "Connection timed out" 00:27:07.720 } 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:07.720 13:01:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 764600 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.720 rmmod nvme_tcp 00:27:07.720 rmmod nvme_fabrics 00:27:07.720 rmmod nvme_keyring 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 764415 ']' 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 764415 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 764415 ']' 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 764415 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 764415 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 764415' 00:27:07.720 killing process with pid 764415 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 764415 
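Teardown runs in a fixed order: clear the EXIT trap, kill the host-side app (pid 764600), then nvmftestfini unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules (the rmmod lines) and finally stops the nvmf target (pid 764415) via killprocess. The guard checks visible in the trace (kill -0, the ps comm= lookup, the comparison against sudo) reduce to roughly this sketch; it is simplified, the real helper does more to wait out stragglers:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        if [ "$(uname)" = Linux ]; then
            # refuse to signal a bare sudo wrapper by mistake
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it, as in the trace
    }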
00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 764415 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.720 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.992 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.992 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.992 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.992 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.992 13:01:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.923 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:09.923 00:27:09.923 real 0m22.042s 00:27:09.923 user 0m25.355s 00:27:09.923 sys 0m7.979s 00:27:09.923 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.923 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:09.923 ************************************ 00:27:09.923 END TEST nvmf_host_discovery 00:27:09.923 ************************************ 00:27:09.923 13:01:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:09.923 13:01:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:09.923 13:01:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.923 13:01:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.923 ************************************ 00:27:09.923 START TEST nvmf_host_multipath_status 00:27:09.923 ************************************ 00:27:09.923 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:09.923 * Looking for test storage... 
00:27:10.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:10.184 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:10.184 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:27:10.184 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:10.184 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.185 --rc genhtml_branch_coverage=1 00:27:10.185 --rc genhtml_function_coverage=1 00:27:10.185 --rc genhtml_legend=1 00:27:10.185 --rc geninfo_all_blocks=1 00:27:10.185 --rc geninfo_unexecuted_blocks=1 00:27:10.185 00:27:10.185 ' 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.185 --rc genhtml_branch_coverage=1 00:27:10.185 --rc genhtml_function_coverage=1 00:27:10.185 --rc genhtml_legend=1 00:27:10.185 --rc geninfo_all_blocks=1 00:27:10.185 --rc geninfo_unexecuted_blocks=1 00:27:10.185 00:27:10.185 ' 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.185 --rc genhtml_branch_coverage=1 00:27:10.185 --rc genhtml_function_coverage=1 00:27:10.185 --rc genhtml_legend=1 00:27:10.185 --rc geninfo_all_blocks=1 00:27:10.185 --rc geninfo_unexecuted_blocks=1 00:27:10.185 00:27:10.185 ' 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:10.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.185 --rc genhtml_branch_coverage=1 00:27:10.185 --rc genhtml_function_coverage=1 00:27:10.185 --rc genhtml_legend=1 00:27:10.185 --rc geninfo_all_blocks=1 00:27:10.185 --rc geninfo_unexecuted_blocks=1 00:27:10.185 00:27:10.185 ' 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
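The lcov probe above decides which coverage flags this environment needs: lt 1.15 2 holds, so the pre-2.0 spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is exported in LCOV_OPTS/LCOV. The comparison splits versions on ., - and : and compares component-wise, with missing components defaulting to 0. Condensed from the xtrace into a sketch; the real cmp_versions in scripts/common.sh also handles the >, <=, >= and == operators through its case statement:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && { [[ $op == '<' ]]; return; }   # first difference decides
            (( a > b )) && { [[ $op == '>' ]]; return; }
        done
        [[ $op == '==' ]]
    }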
00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.185 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:10.186 13:01:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:18.329 13:01:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:18.329 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
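The scan above whitelists NVMe-oF-capable NICs by PCI vendor:device ID (Intel E810 0x1592/0x159b and X722 0x37d2, plus a list of Mellanox ConnectX IDs) and matches two E810 functions, 0000:31:00.0 and 0000:31:00.1. Each function is then resolved to its kernel netdev through sysfs; the loop producing the "Found net devices under ..." lines just below reduces to roughly this sketch (simplified: the real code also filters on the interface's operstate, the [[ up == up ]] checks in the trace):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs for this function
        [[ -e ${pci_net_devs[0]} ]] || continue            # no bound netdev, skip
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The resulting interfaces, cvl_0_0 and cvl_0_1, are what the harness then splits into the target side (moved into the cvl_0_0_ns_spdk network namespace, 10.0.0.2) and the initiator side (10.0.0.1).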
00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:18.329 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:18.329 Found net devices under 0000:31:00.0: cvl_0_0 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:27:18.329 Found net devices under 0000:31:00.1: cvl_0_1 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.329 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:18.330 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:18.330 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.330 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.330 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.330 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.330 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:18.330 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.591 13:01:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:18.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:27:18.591 00:27:18.591 --- 10.0.0.2 ping statistics --- 00:27:18.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.591 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:27:18.591 00:27:18.591 --- 10.0.0.1 ping statistics --- 00:27:18.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.591 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=771480 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 771480 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 771480 ']' 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.591 13:01:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.591 13:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:18.591 [2024-11-25 13:01:58.436052] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:27:18.591 [2024-11-25 13:01:58.436105] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.852 [2024-11-25 13:01:58.522100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:18.852 [2024-11-25 13:01:58.557234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.852 [2024-11-25 13:01:58.557265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.852 [2024-11-25 13:01:58.557276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.852 [2024-11-25 13:01:58.557283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.852 [2024-11-25 13:01:58.557289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.852 [2024-11-25 13:01:58.558529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.852 [2024-11-25 13:01:58.558532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.423 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:19.423 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:19.423 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:19.423 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:19.423 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:19.423 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.423 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=771480 00:27:19.423 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:19.684 [2024-11-25 13:01:59.403593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.684 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:19.945 Malloc0 00:27:19.945 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:27:19.945 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:20.207 13:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.207 [2024-11-25 13:02:00.083394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.207 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:20.469 [2024-11-25 13:02:00.251803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=771843 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 771843 /var/tmp/bdevperf.sock 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 771843 ']' 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:20.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
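The target bring-up captured above reduces to a short JSON-RPC sequence. A minimal stand-alone sketch of the same calls follows; the rpc variable is a hypothetical shorthand for this workspace's scripts/rpc.py, and the sizes and flags are copied from the log rather than being recommended values:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # hypothetical shorthand
  $rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, 8192 B in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2  # -r enables ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on one ANA-reporting subsystem give the initiator two distinct portals (4420 and 4421), which is what the per-port ANA transitions below exercise.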
00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:20.469 13:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:21.412 13:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.412 13:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:21.412 13:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:21.412 13:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:21.982 Nvme0n1 00:27:21.982 13:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:22.553 Nvme0n1 00:27:22.553 13:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:22.553 13:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:24.466 13:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:24.466 13:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:24.728 13:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:24.728 13:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.114 13:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:26.375 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.375 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:26.375 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.375 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:26.637 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.637 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:26.637 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.637 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.637 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.637 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:26.637 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.637 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:26.898 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.898 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:26.898 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
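Every port_status probe in this run is the same two-stage pipeline: dump the initiator-side I/O paths from bdevperf over its RPC socket, then select one field of one portal with jq. A reduced version, assuming the same socket path used above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # hypothetical shorthand
  port=4420      # portal to inspect
  field=current  # one of: current, connected, accessible
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"

check_status then simply string-compares the printed true/false against its six expected values, one per port/field combination.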
00:27:27.159 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:27.159 13:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:28.166 13:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:28.166 13:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:28.166 13:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.166 13:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:28.428 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:28.428 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:28.428 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.428 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:28.690 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.690 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:28.690 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.690 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:28.690 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.690 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:28.690 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.690 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:28.951 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.951 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:28.951 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:27:28.951 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:29.212 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.212 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:29.212 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.212 13:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:29.212 13:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.212 13:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:29.212 13:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:29.472 13:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:29.732 13:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:30.677 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:30.677 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:30.677 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.677 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:30.938 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.938 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:30.938 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.938 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:31.199 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:31.199 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:31.199 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.199 13:02:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:31.199 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.199 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:31.199 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.199 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:31.461 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.461 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:31.461 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.461 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:31.722 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.722 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:31.722 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.722 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:31.722 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.722 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:31.722 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:31.983 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:32.243 13:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:33.186 13:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:33.186 13:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:33.186 13:02:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.186 13:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:33.447 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.447 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:33.447 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.447 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:33.447 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:33.447 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:33.447 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.447 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:33.709 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.709 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:33.709 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.709 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:33.971 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.971 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:33.971 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.971 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:33.971 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.971 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:33.971 13:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.971 13:02:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:34.232 13:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:34.232 13:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:34.232 13:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:34.493 13:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:34.754 13:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:35.698 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:35.698 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:35.698 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.698 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:35.959 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:35.959 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:35.959 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.959 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:35.959 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:35.959 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:35.959 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.959 13:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:36.219 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.220 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:36.220 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.220 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:36.480 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.480 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:36.480 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.480 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:36.740 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:36.740 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:36.740 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.740 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:36.740 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:36.740 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:36.740 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:37.001 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:37.261 13:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:38.204 13:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:38.204 13:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:38.204 13:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.204 13:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:38.466 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:38.466 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:38.466 13:02:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.466 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:38.466 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.466 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:38.466 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.466 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:38.727 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.727 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:38.727 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.727 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:38.988 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.988 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:38.988 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.988 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:38.988 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:38.988 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:38.989 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.989 13:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:39.249 13:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.249 13:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:39.510 13:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:27:39.510 13:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:39.510 13:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:39.774 13:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:40.716 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:40.716 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:40.716 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.716 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:40.977 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.977 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:40.977 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.977 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:41.238 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.238 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:41.238 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.238 13:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:41.497 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.497 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:41.497 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.497 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:41.497 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.497 13:02:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:41.497 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:41.497 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.756 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.756 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:41.756 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.756 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:42.014 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.014 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:42.014 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:42.274 13:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:42.274 13:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:43.213 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:43.213 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:43.213 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.213 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:43.472 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.472 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:43.472 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.472 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:43.732 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.732 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:43.732 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.732 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:43.992 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.993 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:43.993 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.993 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:43.993 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.993 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:43.993 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.993 13:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:44.253 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.253 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:44.253 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.253 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:44.514 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.514 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:44.514 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:44.514 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:44.775 13:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
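From multipath_status.sh@116 onward the Nvme0n1 bdev runs with the active_active policy instead of the default active_passive, so the expected current flags change: all paths in the best available ANA state are used at once. That is why optimized/optimized (@121) and non_optimized/non_optimized (@131, checked next) both expect current=true on 4420 and 4421, while the mixed non_optimized/optimized case (@125) still expects only the optimized portal to be current. The policy switch itself is the single initiator-side RPC captured earlier:

  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  # afterwards, every path in the best available ANA state reports current=true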
00:27:45.714 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:45.714 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:45.714 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.714 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:45.975 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.975 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:45.975 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.975 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:46.235 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.235 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:46.235 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.235 13:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:46.235 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.235 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:46.235 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.235 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:46.495 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.495 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:46.495 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.495 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:46.754 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.754 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:46.754 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.754 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:47.014 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:47.014 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:47.014 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:47.014 13:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:47.274 13:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:48.215 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:48.215 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:48.215 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.215 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:48.475 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.475 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:48.475 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:48.475 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.736 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.736 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:48.736 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.736 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:48.736 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:48.736 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:48.736 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.736 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:48.997 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.997 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:48.997 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.997 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:49.257 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:49.257 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:49.257 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:49.257 13:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 771843 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 771843 ']' 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 771843 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771843 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771843' 00:27:49.529 killing process with pid 771843 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 771843 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 771843 00:27:49.529 { 00:27:49.529 "results": [ 00:27:49.529 { 00:27:49.529 "job": "Nvme0n1", 00:27:49.529 
"core_mask": "0x4", 00:27:49.529 "workload": "verify", 00:27:49.529 "status": "terminated", 00:27:49.529 "verify_range": { 00:27:49.529 "start": 0, 00:27:49.529 "length": 16384 00:27:49.529 }, 00:27:49.529 "queue_depth": 128, 00:27:49.529 "io_size": 4096, 00:27:49.529 "runtime": 26.901491, 00:27:49.529 "iops": 10774.161179393364, 00:27:49.529 "mibps": 42.08656710700533, 00:27:49.529 "io_failed": 0, 00:27:49.529 "io_timeout": 0, 00:27:49.529 "avg_latency_us": 11862.171811993474, 00:27:49.529 "min_latency_us": 127.14666666666666, 00:27:49.529 "max_latency_us": 3019898.88 00:27:49.529 } 00:27:49.529 ], 00:27:49.529 "core_count": 1 00:27:49.529 } 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 771843 00:27:49.529 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:49.529 [2024-11-25 13:02:00.327534] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:27:49.529 [2024-11-25 13:02:00.327590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771843 ] 00:27:49.529 [2024-11-25 13:02:00.392549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.529 [2024-11-25 13:02:00.421033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.529 Running I/O for 90 seconds... 00:27:49.529 9453.00 IOPS, 36.93 MiB/s [2024-11-25T12:02:29.432Z] 9562.00 IOPS, 37.35 MiB/s [2024-11-25T12:02:29.432Z] 9601.33 IOPS, 37.51 MiB/s [2024-11-25T12:02:29.432Z] 9583.75 IOPS, 37.44 MiB/s [2024-11-25T12:02:29.432Z] 9845.40 IOPS, 38.46 MiB/s [2024-11-25T12:02:29.432Z] 10367.83 IOPS, 40.50 MiB/s [2024-11-25T12:02:29.432Z] 10725.57 IOPS, 41.90 MiB/s [2024-11-25T12:02:29.432Z] 10672.25 IOPS, 41.69 MiB/s [2024-11-25T12:02:29.432Z] 10549.22 IOPS, 41.21 MiB/s [2024-11-25T12:02:29.432Z] 10451.50 IOPS, 40.83 MiB/s [2024-11-25T12:02:29.432Z] 10383.09 IOPS, 40.56 MiB/s [2024-11-25T12:02:29.432Z] [2024-11-25 13:02:14.223077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:49.529 [2024-11-25 13:02:14.223674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.529 [2024-11-25 13:02:14.223679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
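The NOTICE records being replayed here come from try.txt, bdevperf's log of the phases in which one of the two paths had been made ANA-inaccessible: every 4 KiB command (len:8 blocks, which matches the configured io_size of 4096 assuming the default 512-byte blocks) completes with ASYMMETRIC ACCESS INACCESSIBLE, the (03/02) pair SPDK prints being Status Code Type 3h (Path Related Status) / Status Code 02h. The host-side check that produced the pass/fail verdicts traced before this dump is the port_status helper (multipath_status.sh@64), essentially one RPC plus one jq filter per listener attribute. A minimal sketch of that pattern, with paths shortened; the script's own helper may differ in detail:

  port_status() {
    local port=$1 attr=$2 expected=$3 actual
    # list every io_path the bdevperf app sees, keep the one listening on $port
    actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
  }
  port_status 4421 accessible false   # what check_status asserts after an ANA flip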
00:27:49.530 [2024-11-25 13:02:14.223697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.223762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.223779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.223797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.223813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.223830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.223849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.223869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.223974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.223979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
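These failing completions are induced, not a malfunction: the target's listeners are walked through ANA states by set_ANA_state (multipath_status.sh@59-@60), one nvmf_subsystem_listener_set_ana_state RPC per listener, after which the script sleeps so the initiator can observe the change. A hedged reconstruction of that helper from the trace, with the sleep from @134 folded in for a self-contained sketch:

  set_ANA_state() {
    # $1 = state for the 4420 listener, $2 = state for the 4421 listener
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    sleep 1   # let the host process the ANA change notification
  }
  set_ANA_state non_optimized inaccessible   # the transition traced above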
00:27:49.530 [2024-11-25 13:02:14.224589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.224984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.224989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.225045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.225063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.225081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.225099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.225117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.225137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
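Stepping back from the I/O records for a moment: the killprocess 771843 sequence traced just before this dump (autotest_common.sh@954-@978) is the harness's guarded way of stopping bdevperf — confirm the pid was given and is alive, refuse to signal a sudo wrapper, then kill and reap. A sketch of the same guard rail, simplified from what the trace shows (the real helper also branches on the OS before calling ps):

  killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                 # the '[' -z 771843 ']' check in the trace
    kill -0 "$pid" 2>/dev/null || return 1    # still running?
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name != sudo ]] || return 1   # never signal our own sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                       # reap it; nonzero exit is expected on SIGTERM
  }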
00:27:49.530 [2024-11-25 13:02:14.225155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.530 [2024-11-25 13:02:14.225172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:49.530 [2024-11-25 13:02:14.225444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.530 [2024-11-25 13:02:14.225450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:27:49.531 [2024-11-25 13:02:14.225816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.225987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.225991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:14.226323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:14.226344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:14.226366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:14.226388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:14.226409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:14.226430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:14.226452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:14.226472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:14.226667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:14.226674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:49.531 10246.58 IOPS, 40.03 MiB/s [2024-11-25T12:02:29.434Z] 9458.38 IOPS, 36.95 MiB/s [2024-11-25T12:02:29.434Z] 8782.79 IOPS, 34.31 MiB/s [2024-11-25T12:02:29.434Z] 8270.20 IOPS, 32.31 MiB/s [2024-11-25T12:02:29.434Z] 8561.62 IOPS, 33.44 MiB/s [2024-11-25T12:02:29.434Z] 8816.47 IOPS, 34.44 MiB/s [2024-11-25T12:02:29.434Z] 9238.22 IOPS, 36.09 MiB/s [2024-11-25T12:02:29.434Z] 9637.47 IOPS, 37.65 MiB/s [2024-11-25T12:02:29.434Z] 9925.00 IOPS, 38.77 MiB/s [2024-11-25T12:02:29.434Z] 10069.67 IOPS, 39.33 MiB/s [2024-11-25T12:02:29.434Z] 10197.05 IOPS, 39.83 MiB/s [2024-11-25T12:02:29.434Z] 10448.65 IOPS, 40.82 MiB/s [2024-11-25T12:02:29.434Z] 10713.29 IOPS, 41.85 MiB/s [2024-11-25T12:02:29.434Z] [2024-11-25 13:02:27.015430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:27.015470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.015501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 
13:02:27.015507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.015518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:27.015528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.015539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:27.015544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.015555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:27.015560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.015571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:27.015576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.015587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:27.015592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.015603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:27.015608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.016643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.531 [2024-11-25 13:02:27.016656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.016668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:27.016674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.016685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.531 [2024-11-25 13:02:27.016691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:49.531 [2024-11-25 13:02:27.016701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50840 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000
00:27:49.531 [2024-11-25 13:02:27.016707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:49.531 [2024-11-25 13:02:27.016717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:49.531 [2024-11-25 13:02:27.016722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:49.531 [2024-11-25 13:02:27.016733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:49.531 [2024-11-25 13:02:27.016738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:49.531 [2024-11-25 13:02:27.016749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:49.531 [2024-11-25 13:02:27.016754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:49.531 [2024-11-25 13:02:27.016766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:49.531 [2024-11-25 13:02:27.016772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:49.531 10863.56 IOPS, 42.44 MiB/s
[2024-11-25T12:02:29.434Z] 10814.27 IOPS, 42.24 MiB/s
[2024-11-25T12:02:29.434Z] Received shutdown signal, test time was about 26.902103 seconds
00:27:49.531
00:27:49.531 Latency(us)
00:27:49.531 [2024-11-25T12:02:29.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:49.531 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:49.531 Verification LBA range: start 0x0 length 0x4000
00:27:49.531 Nvme0n1 : 26.90 10774.16 42.09 0.00 0.00 11862.17 127.15 3019898.88
00:27:49.531 [2024-11-25T12:02:29.434Z] ===================================================================================================================
00:27:49.531 [2024-11-25T12:02:29.434Z] Total : 10774.16 42.09 0.00 0.00 11862.17 127.15 3019898.88
00:27:49.531 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
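The summary table closes the loop on the JSON block emitted at shutdown, and the numbers are internally consistent: 10774.16 IOPS x 4096 B per I/O is 44,130,964 B/s, i.e. 42.09 MiB/s, the value in the MiB/s column; and Little's law recovers the queue depth from throughput and latency, 10774.16 IOPS x 11862.17 us average latency giving about 127.8 requests in flight, which is the configured queue_depth of 128 within rounding. The same cross-check can be scripted against the JSON, assuming it was captured to results.json (the filename is illustrative):

  jq -r '.results[0] | [.iops, .avg_latency_us, .queue_depth] | @tsv' results.json |
    awk '{printf "MiB/s=%.2f  in-flight=%.1f  (queue_depth=%s)\n", $1*4096/1048576, $1*$2/1e6, $3}'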
00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.791 rmmod nvme_tcp 00:27:49.791 rmmod nvme_fabrics 00:27:49.791 rmmod nvme_keyring 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 771480 ']' 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 771480 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 771480 ']' 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 771480 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 771480 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 771480' 00:27:49.791 killing process with pid 771480 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 771480 00:27:49.791 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 771480 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.052 13:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.052 13:02:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.597 13:02:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:52.597 00:27:52.597 real 0m42.151s 00:27:52.597 user 1m46.861s 00:27:52.597 sys 0m12.385s 00:27:52.597 13:02:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.597 13:02:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 ************************************ 00:27:52.597 END TEST nvmf_host_multipath_status 00:27:52.597 ************************************ 00:27:52.597 13:02:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:52.597 13:02:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:52.597 13:02:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:52.597 13:02:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 ************************************ 00:27:52.597 START TEST nvmf_discovery_remove_ifc 00:27:52.597 ************************************ 00:27:52.597 13:02:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:52.597 * Looking for test storage... 00:27:52.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.597 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:52.598 13:02:32 
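The START/END banners and the real/user/sys timings above come from the run_test wrapper in autotest_common.sh, which brackets and times each test script. A minimal sketch of that pattern (simplified; the real helper also threads exit codes and xtrace state through):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # run the test script with its arguments
        echo "************* END TEST $name *************"
    }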
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:52.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.598 --rc genhtml_branch_coverage=1 00:27:52.598 --rc genhtml_function_coverage=1 00:27:52.598 --rc genhtml_legend=1 00:27:52.598 --rc geninfo_all_blocks=1 00:27:52.598 --rc geninfo_unexecuted_blocks=1 00:27:52.598 00:27:52.598 ' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:52.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.598 --rc genhtml_branch_coverage=1 00:27:52.598 --rc genhtml_function_coverage=1 00:27:52.598 --rc genhtml_legend=1 00:27:52.598 --rc geninfo_all_blocks=1 00:27:52.598 --rc geninfo_unexecuted_blocks=1 00:27:52.598 00:27:52.598 ' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:52.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.598 --rc genhtml_branch_coverage=1 00:27:52.598 --rc genhtml_function_coverage=1 00:27:52.598 --rc genhtml_legend=1 00:27:52.598 --rc geninfo_all_blocks=1 00:27:52.598 --rc geninfo_unexecuted_blocks=1 00:27:52.598 00:27:52.598 ' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:52.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.598 --rc genhtml_branch_coverage=1 00:27:52.598 --rc genhtml_function_coverage=1 00:27:52.598 --rc genhtml_legend=1 
00:27:52.598 --rc geninfo_all_blocks=1 00:27:52.598 --rc geninfo_unexecuted_blocks=1 00:27:52.598 00:27:52.598 ' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
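The cmp_versions walk above splits each version string on '.', '-' and ':' into an array and compares field by field; that is what decides whether the legacy lcov coverage options get exported. The same comparison pulled out on its own, as a sketch that assumes purely numeric fields:

    # Returns 0 when $1 < $2, mirroring the scripts/common.sh trace above.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'lcov 1.15 < 2: legacy coverage options apply'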
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
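paths/export.sh prepends the same toolchain directories every time it is sourced, which is why /opt/golangci, /opt/protoc and /opt/go each appear many times in the PATH above. That is harmless, but a generic one-liner (not part of the test flow) collapses the duplicates while preserving order:

    # Keep only the first occurrence of each PATH entry.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')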
00:27:52.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:52.598 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:52.599 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:52.599 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.599 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.599 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.599 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:52.599 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:52.599 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:52.599 13:02:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.818 13:02:40 
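The "integer expression expected" complaint from common.sh line 33 is the single-bracket test just above it evaluating an empty string numerically ('[' '' -eq 1 ']'). The script tolerates the failed test, but the usual guard is a default expansion; a generic sketch (variable name hypothetical, not the actual common.sh line):

    # '' -eq 1 is an error under [ ]; defaulting the variable silences it.
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo 'feature enabled'
    fi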
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.818 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:00.819 Found 
0000:31:00.0 (0x8086 - 0x159b) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:00.819 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:00.819 Found net devices under 0000:31:00.0: cvl_0_0 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:00.819 Found net devices under 0000:31:00.1: cvl_0_1 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:00.819 13:02:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.819 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:00.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:28:00.820 00:28:00.820 --- 10.0.0.2 ping statistics --- 00:28:00.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.820 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:28:00.820 00:28:00.820 --- 10.0.0.1 ping statistics --- 00:28:00.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.820 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=782409 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 782409 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
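Condensing the nvmftestinit plumbing above into one place: the target-side port (cvl_0_0) is moved into a private network namespace, both sides are addressed, the NVMe/TCP port is opened, and connectivity is verified with one ping in each direction (interface names are specific to this CI host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator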
00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 782409 ']' 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.820 13:02:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:00.820 [2024-11-25 13:02:40.528379] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:28:00.820 [2024-11-25 13:02:40.528430] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.820 [2024-11-25 13:02:40.630113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.820 [2024-11-25 13:02:40.664306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.820 [2024-11-25 13:02:40.664341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.820 [2024-11-25 13:02:40.664348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.820 [2024-11-25 13:02:40.664355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.820 [2024-11-25 13:02:40.664361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:00.820 [2024-11-25 13:02:40.664959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 [2024-11-25 13:02:41.425434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.762 [2024-11-25 13:02:41.433691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:01.762 null0 00:28:01.762 [2024-11-25 13:02:41.465630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=782444 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 782444 /tmp/host.sock 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 782444 ']' 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:01.762 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.762 13:02:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:01.762 [2024-11-25 13:02:41.542244] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
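The bare rpc_cmd at discovery_remove_ifc.sh line 43 issues the target-side RPC batch; the notices that follow (TCP transport init, a discovery listener on port 8009, a null0 bdev, an I/O listener on port 4420) correspond to calls along these lines. This is a sketch of the likely batch, not a verbatim copy of the script:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009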
00:28:01.762 [2024-11-25 13:02:41.542313] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782444 ] 00:28:01.762 [2024-11-25 13:02:41.626320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.023 [2024-11-25 13:02:41.669363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.594 13:02:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:03.979 [2024-11-25 13:02:43.455057] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:03.979 [2024-11-25 13:02:43.455077] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:03.979 [2024-11-25 13:02:43.455090] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:03.979 [2024-11-25 13:02:43.542381] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:03.979 [2024-11-25 13:02:43.725543] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:03.979 [2024-11-25 13:02:43.726649] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1be86e0:1 started. 
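The host-side bring-up above is issued through rpc_cmd against the host app's socket; rpc_cmd in these tests forwards to scripts/rpc.py, so spelled out it is:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach    # blocks until nvme0 attaches

The 2-second controller-loss timeout and 1-second reconnect delay are what make the interface-removal phase below converge quickly.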
00:28:03.979 [2024-11-25 13:02:43.728224] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:03.979 [2024-11-25 13:02:43.728271] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:03.979 [2024-11-25 13:02:43.728292] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:03.979 [2024-11-25 13:02:43.728306] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:03.979 [2024-11-25 13:02:43.728326] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.979 [2024-11-25 13:02:43.774661] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1be86e0 was disconnected and freed. delete nvme_qpair. 
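The get_bdev_list / sleep cycle that dominates the rest of the trace is a simple poll over the host socket: list the bdev names and wait until the list equals the expected value (nvme0n1 here, the empty string after the interface is pulled). Reconstructed from the xtrace; the real helpers may also bound the wait, which this sketch omits:

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {    # usage: wait_for_bdev nvme0n1   (or: wait_for_bdev '')
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }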
00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:03.979 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:04.241 13:02:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:05.182 13:02:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:05.182 13:02:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:05.182 13:02:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:05.182 13:02:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.182 13:02:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:05.182 13:02:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:05.182 13:02:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:05.182 13:02:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.183 13:02:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:05.183 13:02:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:06.125 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:06.125 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:06.125 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:06.125 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:06.125 13:02:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.125 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:06.125 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:06.384 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.384 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:06.384 13:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:07.327 13:02:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:08.269 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:08.269 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:08.269 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.269 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:08.269 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.269 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:08.269 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.269 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.530 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:08.530 13:02:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:09.480 [2024-11-25 13:02:49.168930] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:09.480 [2024-11-25 13:02:49.168970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.480 [2024-11-25 13:02:49.168983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.480 [2024-11-25 13:02:49.168992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.480 [2024-11-25 13:02:49.169000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.480 [2024-11-25 13:02:49.169009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.480 [2024-11-25 13:02:49.169016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.480 [2024-11-25 13:02:49.169024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.480 [2024-11-25 13:02:49.169032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.480 [2024-11-25 13:02:49.169040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:09.480 [2024-11-25 13:02:49.169048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:09.480 [2024-11-25 13:02:49.169055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc5090 is same with the state(6) to be set 00:28:09.480 [2024-11-25 13:02:49.178951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc5090 (9): Bad file descriptor 00:28:09.480 13:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:09.480 13:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:09.480 13:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:09.480 13:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.480 13:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:09.480 13:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:09.480 13:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:09.480 [2024-11-25 13:02:49.188986] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:09.480 [2024-11-25 13:02:49.188999] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:09.480 [2024-11-25 13:02:49.189005] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:09.480 [2024-11-25 13:02:49.189010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:09.480 [2024-11-25 13:02:49.189030] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:10.427 [2024-11-25 13:02:50.203909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:10.427 [2024-11-25 13:02:50.203954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc5090 with addr=10.0.0.2, port=4420 00:28:10.427 [2024-11-25 13:02:50.203968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc5090 is same with the state(6) to be set 00:28:10.427 [2024-11-25 13:02:50.203996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc5090 (9): Bad file descriptor 00:28:10.427 [2024-11-25 13:02:50.204374] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:28:10.427 [2024-11-25 13:02:50.204398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:10.427 [2024-11-25 13:02:50.204406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:10.427 [2024-11-25 13:02:50.204415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:10.427 [2024-11-25 13:02:50.204423] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:10.427 [2024-11-25 13:02:50.204429] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:10.427 [2024-11-25 13:02:50.204434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:10.427 [2024-11-25 13:02:50.204443] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:10.427 [2024-11-25 13:02:50.204449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:10.427 13:02:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.427 13:02:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:10.427 13:02:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:11.368 [2024-11-25 13:02:51.206821] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:11.368 [2024-11-25 13:02:51.206841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:11.368 [2024-11-25 13:02:51.206852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:11.368 [2024-11-25 13:02:51.206860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:11.368 [2024-11-25 13:02:51.206871] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:11.368 [2024-11-25 13:02:51.206883] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:11.368 [2024-11-25 13:02:51.206889] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
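With cvl_0_0 down, every reconnect attempt fails with errno 110 (ETIMEDOUT on the connect()), and because discovery was started with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1, bdev_nvme stops retrying after roughly two seconds, fails the controller, and removes the discovery entry and bdev seen below. A hypothetical way to watch that state machine from outside the test is to poll the controller list while the link is down:

    # Controller entries disappear once the loss timeout expires.
    watch -n 1 'scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers'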
00:28:11.368 [2024-11-25 13:02:51.206893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:11.368 [2024-11-25 13:02:51.206915] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:11.368 [2024-11-25 13:02:51.206937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.368 [2024-11-25 13:02:51.206947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.368 [2024-11-25 13:02:51.206958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.368 [2024-11-25 13:02:51.206965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.368 [2024-11-25 13:02:51.206973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.368 [2024-11-25 13:02:51.206981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.368 [2024-11-25 13:02:51.206988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.368 [2024-11-25 13:02:51.206995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.368 [2024-11-25 13:02:51.207004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:11.368 [2024-11-25 13:02:51.207011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.368 [2024-11-25 13:02:51.207018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
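The repeated (00/08) completions above are the admin queue being drained: SCT 0x0 (Generic Command Status) with SC 0x08 (Command Aborted due to SQ Deletion), so the outstanding ASYNC EVENT REQUEST and KEEP ALIVE commands are cancelled when their submission queue is torn down. A worked decode of that status word (not tooling from this run):

  # NVMe CQE status word: bit 0 phase, bits 8:1 SC, bits 11:9 SCT, bit 15 DNR.
  python3 -c '
  status = (0x0 << 9) | (0x08 << 1)          # SCT=0, SC=0x08, as printed above
  print("SCT=%d SC=0x%02x DNR=%d" % ((status >> 9) & 0x7,
                                     (status >> 1) & 0xff,
                                     (status >> 15) & 1))'
  # SCT=0 SC=0x08 DNR=0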
00:28:11.368 [2024-11-25 13:02:51.207388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb43c0 (9): Bad file descriptor 00:28:11.368 [2024-11-25 13:02:51.208400] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:11.368 [2024-11-25 13:02:51.208412] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:11.368 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:11.368 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.368 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.368 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.368 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.368 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.368 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.368 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:11.628 13:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:12.569 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:12.569 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:12.569 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.569 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:12.569 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.569 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:12.569 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:12.569 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.831 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:12.831 13:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:13.403 [2024-11-25 13:02:53.219136] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:13.403 [2024-11-25 13:02:53.219154] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:13.403 [2024-11-25 13:02:53.219167] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:13.664 [2024-11-25 13:02:53.346571] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:13.664 [2024-11-25 13:02:53.447446] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:13.664 [2024-11-25 13:02:53.448362] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1bb6860:1 started. 00:28:13.664 [2024-11-25 13:02:53.449575] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:13.664 [2024-11-25 13:02:53.449611] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:13.664 [2024-11-25 13:02:53.449631] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:13.664 [2024-11-25 13:02:53.449645] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:13.664 [2024-11-25 13:02:53.449653] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:13.664 [2024-11-25 13:02:53.457187] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1bb6860 was disconnected and freed. delete nvme_qpair. 
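Recovery succeeds here because the host-side discovery service behind these Discovery[10.0.0.2:8009] messages notices the subsystem again once the interface is back. That service is started with the bdev_nvme_start_discovery RPC; a sketch against the same host socket (flag spelling per current SPDK rpc.py, an assumption for the exact revision under test):

  # Attach a discovery controller to 10.0.0.2:8009 and auto-create
  # "nvme*" bdevs for every subsystem it reports.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4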
00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 782444 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 782444 ']' 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 782444 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.664 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782444 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782444' 00:28:13.960 killing process with pid 782444 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 782444 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 782444 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.960 rmmod nvme_tcp 00:28:13.960 rmmod nvme_fabrics 00:28:13.960 rmmod nvme_keyring 00:28:13.960 13:02:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 782409 ']' 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 782409 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 782409 ']' 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 782409 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.960 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782409 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782409' 00:28:14.221 killing process with pid 782409 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 782409 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 782409 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.221 13:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.762 00:28:16.762 real 0m24.098s 00:28:16.762 user 0m27.535s 00:28:16.762 sys 0m7.566s 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.762 ************************************ 00:28:16.762 END TEST nvmf_discovery_remove_ifc 00:28:16.762 ************************************ 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.762 ************************************ 00:28:16.762 START TEST nvmf_identify_kernel_target 00:28:16.762 ************************************ 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:16.762 * Looking for test storage... 00:28:16.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:16.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.762 --rc genhtml_branch_coverage=1 00:28:16.762 --rc genhtml_function_coverage=1 00:28:16.762 --rc genhtml_legend=1 00:28:16.762 --rc geninfo_all_blocks=1 00:28:16.762 --rc geninfo_unexecuted_blocks=1 00:28:16.762 00:28:16.762 ' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:16.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.762 --rc genhtml_branch_coverage=1 00:28:16.762 --rc genhtml_function_coverage=1 00:28:16.762 --rc genhtml_legend=1 00:28:16.762 --rc geninfo_all_blocks=1 00:28:16.762 --rc geninfo_unexecuted_blocks=1 00:28:16.762 00:28:16.762 ' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:16.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.762 --rc genhtml_branch_coverage=1 00:28:16.762 --rc genhtml_function_coverage=1 00:28:16.762 --rc genhtml_legend=1 00:28:16.762 --rc geninfo_all_blocks=1 00:28:16.762 --rc geninfo_unexecuted_blocks=1 00:28:16.762 00:28:16.762 ' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:16.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.762 --rc genhtml_branch_coverage=1 00:28:16.762 --rc genhtml_function_coverage=1 00:28:16.762 --rc genhtml_legend=1 00:28:16.762 --rc geninfo_all_blocks=1 00:28:16.762 --rc geninfo_unexecuted_blocks=1 00:28:16.762 00:28:16.762 ' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:16.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.762 13:02:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.909 13:03:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:24.909 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:24.909 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:24.909 Found net devices under 0000:31:00.0: cvl_0_0 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.909 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:24.910 Found net devices under 0000:31:00.1: cvl_0_1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:28:24.910 00:28:24.910 --- 10.0.0.2 ping statistics --- 00:28:24.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.910 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:24.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:28:24.910 00:28:24.910 --- 10.0.0.1 ping statistics --- 00:28:24.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.910 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.910 13:03:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:24.910 13:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:29.121 Waiting for block devices as requested 00:28:29.121 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:29.121 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:29.121 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:29.121 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:29.121 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:29.382 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:29.382 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:29.382 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:29.382 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:29.643 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:29.643 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:29.905 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:29.905 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:29.905 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:29.905 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:30.166 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:30.166 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
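After the modprobe and block-device checks above, bringing up the kernel target below is just a sequence of configfs writes. Collected into one runnable sketch (root required; the configfs attribute filenames are inferred, since xtrace hides the redirection targets):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  # Shows up as Model Number in the identify output further below.
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  # Linking the subsystem under the port makes 10.0.0.1:4420 live.
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"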
00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:30.428 No valid GPT data, bailing 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:30.428 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:30.689 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:30.689 00:28:30.689 Discovery Log Number of Records 2, Generation counter 2 00:28:30.689 =====Discovery Log Entry 0====== 00:28:30.689 trtype: tcp 00:28:30.689 adrfam: ipv4 00:28:30.689 subtype: current discovery subsystem 00:28:30.689 treq: not specified, sq flow control disable supported 00:28:30.689 portid: 1 00:28:30.689 trsvcid: 4420 00:28:30.689 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:30.689 traddr: 10.0.0.1 00:28:30.689 eflags: none 00:28:30.689 sectype: none 00:28:30.689 =====Discovery Log Entry 1====== 00:28:30.689 trtype: tcp 00:28:30.689 adrfam: ipv4 00:28:30.689 subtype: nvme subsystem 00:28:30.689 treq: not specified, sq flow control disable 
supported 00:28:30.689 portid: 1 00:28:30.689 trsvcid: 4420 00:28:30.689 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:30.689 traddr: 10.0.0.1 00:28:30.689 eflags: none 00:28:30.689 sectype: none 00:28:30.689 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:30.689 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:30.689 ===================================================== 00:28:30.689 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:30.689 ===================================================== 00:28:30.689 Controller Capabilities/Features 00:28:30.689 ================================ 00:28:30.689 Vendor ID: 0000 00:28:30.689 Subsystem Vendor ID: 0000 00:28:30.689 Serial Number: e162420eac17142a3c83 00:28:30.689 Model Number: Linux 00:28:30.689 Firmware Version: 6.8.9-20 00:28:30.689 Recommended Arb Burst: 0 00:28:30.689 IEEE OUI Identifier: 00 00 00 00:28:30.689 Multi-path I/O 00:28:30.689 May have multiple subsystem ports: No 00:28:30.689 May have multiple controllers: No 00:28:30.689 Associated with SR-IOV VF: No 00:28:30.689 Max Data Transfer Size: Unlimited 00:28:30.689 Max Number of Namespaces: 0 00:28:30.689 Max Number of I/O Queues: 1024 00:28:30.689 NVMe Specification Version (VS): 1.3 00:28:30.689 NVMe Specification Version (Identify): 1.3 00:28:30.689 Maximum Queue Entries: 1024 00:28:30.689 Contiguous Queues Required: No 00:28:30.689 Arbitration Mechanisms Supported 00:28:30.689 Weighted Round Robin: Not Supported 00:28:30.689 Vendor Specific: Not Supported 00:28:30.689 Reset Timeout: 7500 ms 00:28:30.689 Doorbell Stride: 4 bytes 00:28:30.689 NVM Subsystem Reset: Not Supported 00:28:30.689 Command Sets Supported 00:28:30.689 NVM Command Set: Supported 00:28:30.689 Boot Partition: Not Supported 00:28:30.689 Memory Page Size Minimum: 4096 bytes 00:28:30.689 Memory Page Size Maximum: 4096 bytes 00:28:30.689 Persistent Memory Region: Not Supported 00:28:30.689 Optional Asynchronous Events Supported 00:28:30.689 Namespace Attribute Notices: Not Supported 00:28:30.689 Firmware Activation Notices: Not Supported 00:28:30.689 ANA Change Notices: Not Supported 00:28:30.689 PLE Aggregate Log Change Notices: Not Supported 00:28:30.689 LBA Status Info Alert Notices: Not Supported 00:28:30.690 EGE Aggregate Log Change Notices: Not Supported 00:28:30.690 Normal NVM Subsystem Shutdown event: Not Supported 00:28:30.690 Zone Descriptor Change Notices: Not Supported 00:28:30.690 Discovery Log Change Notices: Supported 00:28:30.690 Controller Attributes 00:28:30.690 128-bit Host Identifier: Not Supported 00:28:30.690 Non-Operational Permissive Mode: Not Supported 00:28:30.690 NVM Sets: Not Supported 00:28:30.690 Read Recovery Levels: Not Supported 00:28:30.690 Endurance Groups: Not Supported 00:28:30.690 Predictable Latency Mode: Not Supported 00:28:30.690 Traffic Based Keep ALive: Not Supported 00:28:30.690 Namespace Granularity: Not Supported 00:28:30.690 SQ Associations: Not Supported 00:28:30.690 UUID List: Not Supported 00:28:30.690 Multi-Domain Subsystem: Not Supported 00:28:30.690 Fixed Capacity Management: Not Supported 00:28:30.690 Variable Capacity Management: Not Supported 00:28:30.690 Delete Endurance Group: Not Supported 00:28:30.690 Delete NVM Set: Not Supported 00:28:30.690 Extended LBA Formats Supported: Not Supported 00:28:30.690 Flexible Data Placement 
Supported: Not Supported 00:28:30.690 00:28:30.690 Controller Memory Buffer Support 00:28:30.690 ================================ 00:28:30.690 Supported: No 00:28:30.690 00:28:30.690 Persistent Memory Region Support 00:28:30.690 ================================ 00:28:30.690 Supported: No 00:28:30.690 00:28:30.690 Admin Command Set Attributes 00:28:30.690 ============================ 00:28:30.690 Security Send/Receive: Not Supported 00:28:30.690 Format NVM: Not Supported 00:28:30.690 Firmware Activate/Download: Not Supported 00:28:30.690 Namespace Management: Not Supported 00:28:30.690 Device Self-Test: Not Supported 00:28:30.690 Directives: Not Supported 00:28:30.690 NVMe-MI: Not Supported 00:28:30.690 Virtualization Management: Not Supported 00:28:30.690 Doorbell Buffer Config: Not Supported 00:28:30.690 Get LBA Status Capability: Not Supported 00:28:30.690 Command & Feature Lockdown Capability: Not Supported 00:28:30.690 Abort Command Limit: 1 00:28:30.690 Async Event Request Limit: 1 00:28:30.690 Number of Firmware Slots: N/A 00:28:30.690 Firmware Slot 1 Read-Only: N/A 00:28:30.690 Firmware Activation Without Reset: N/A 00:28:30.690 Multiple Update Detection Support: N/A 00:28:30.690 Firmware Update Granularity: No Information Provided 00:28:30.690 Per-Namespace SMART Log: No 00:28:30.690 Asymmetric Namespace Access Log Page: Not Supported 00:28:30.690 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:30.690 Command Effects Log Page: Not Supported 00:28:30.690 Get Log Page Extended Data: Supported 00:28:30.690 Telemetry Log Pages: Not Supported 00:28:30.690 Persistent Event Log Pages: Not Supported 00:28:30.690 Supported Log Pages Log Page: May Support 00:28:30.690 Commands Supported & Effects Log Page: Not Supported 00:28:30.690 Feature Identifiers & Effects Log Page:May Support 00:28:30.690 NVMe-MI Commands & Effects Log Page: May Support 00:28:30.690 Data Area 4 for Telemetry Log: Not Supported 00:28:30.690 Error Log Page Entries Supported: 1 00:28:30.690 Keep Alive: Not Supported 00:28:30.690 00:28:30.690 NVM Command Set Attributes 00:28:30.690 ========================== 00:28:30.690 Submission Queue Entry Size 00:28:30.690 Max: 1 00:28:30.690 Min: 1 00:28:30.690 Completion Queue Entry Size 00:28:30.690 Max: 1 00:28:30.690 Min: 1 00:28:30.690 Number of Namespaces: 0 00:28:30.690 Compare Command: Not Supported 00:28:30.690 Write Uncorrectable Command: Not Supported 00:28:30.690 Dataset Management Command: Not Supported 00:28:30.690 Write Zeroes Command: Not Supported 00:28:30.690 Set Features Save Field: Not Supported 00:28:30.690 Reservations: Not Supported 00:28:30.690 Timestamp: Not Supported 00:28:30.690 Copy: Not Supported 00:28:30.690 Volatile Write Cache: Not Present 00:28:30.690 Atomic Write Unit (Normal): 1 00:28:30.690 Atomic Write Unit (PFail): 1 00:28:30.690 Atomic Compare & Write Unit: 1 00:28:30.690 Fused Compare & Write: Not Supported 00:28:30.690 Scatter-Gather List 00:28:30.690 SGL Command Set: Supported 00:28:30.690 SGL Keyed: Not Supported 00:28:30.690 SGL Bit Bucket Descriptor: Not Supported 00:28:30.690 SGL Metadata Pointer: Not Supported 00:28:30.690 Oversized SGL: Not Supported 00:28:30.690 SGL Metadata Address: Not Supported 00:28:30.690 SGL Offset: Supported 00:28:30.690 Transport SGL Data Block: Not Supported 00:28:30.690 Replay Protected Memory Block: Not Supported 00:28:30.690 00:28:30.690 Firmware Slot Information 00:28:30.690 ========================= 00:28:30.690 Active slot: 0 00:28:30.690 00:28:30.690 00:28:30.690 Error Log 00:28:30.690 
========= 00:28:30.690 00:28:30.690 Active Namespaces 00:28:30.690 ================= 00:28:30.690 Discovery Log Page 00:28:30.690 ================== 00:28:30.690 Generation Counter: 2 00:28:30.690 Number of Records: 2 00:28:30.690 Record Format: 0 00:28:30.690 00:28:30.690 Discovery Log Entry 0 00:28:30.690 ---------------------- 00:28:30.690 Transport Type: 3 (TCP) 00:28:30.690 Address Family: 1 (IPv4) 00:28:30.690 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:30.690 Entry Flags: 00:28:30.690 Duplicate Returned Information: 0 00:28:30.690 Explicit Persistent Connection Support for Discovery: 0 00:28:30.690 Transport Requirements: 00:28:30.690 Secure Channel: Not Specified 00:28:30.690 Port ID: 1 (0x0001) 00:28:30.690 Controller ID: 65535 (0xffff) 00:28:30.690 Admin Max SQ Size: 32 00:28:30.690 Transport Service Identifier: 4420 00:28:30.690 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:30.690 Transport Address: 10.0.0.1 00:28:30.690 Discovery Log Entry 1 00:28:30.690 ---------------------- 00:28:30.690 Transport Type: 3 (TCP) 00:28:30.690 Address Family: 1 (IPv4) 00:28:30.690 Subsystem Type: 2 (NVM Subsystem) 00:28:30.690 Entry Flags: 00:28:30.690 Duplicate Returned Information: 0 00:28:30.690 Explicit Persistent Connection Support for Discovery: 0 00:28:30.690 Transport Requirements: 00:28:30.690 Secure Channel: Not Specified 00:28:30.690 Port ID: 1 (0x0001) 00:28:30.690 Controller ID: 65535 (0xffff) 00:28:30.690 Admin Max SQ Size: 32 00:28:30.690 Transport Service Identifier: 4420 00:28:30.690 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:30.690 Transport Address: 10.0.0.1 00:28:30.690 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:30.952 get_feature(0x01) failed 00:28:30.952 get_feature(0x02) failed 00:28:30.952 get_feature(0x04) failed 00:28:30.952 ===================================================== 00:28:30.952 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.952 ===================================================== 00:28:30.952 Controller Capabilities/Features 00:28:30.952 ================================ 00:28:30.952 Vendor ID: 0000 00:28:30.952 Subsystem Vendor ID: 0000 00:28:30.952 Serial Number: 244b6612e30a6e605941 00:28:30.952 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:30.952 Firmware Version: 6.8.9-20 00:28:30.952 Recommended Arb Burst: 6 00:28:30.952 IEEE OUI Identifier: 00 00 00 00:28:30.952 Multi-path I/O 00:28:30.952 May have multiple subsystem ports: Yes 00:28:30.952 May have multiple controllers: Yes 00:28:30.952 Associated with SR-IOV VF: No 00:28:30.952 Max Data Transfer Size: Unlimited 00:28:30.952 Max Number of Namespaces: 1024 00:28:30.952 Max Number of I/O Queues: 128 00:28:30.952 NVMe Specification Version (VS): 1.3 00:28:30.952 NVMe Specification Version (Identify): 1.3 00:28:30.952 Maximum Queue Entries: 1024 00:28:30.952 Contiguous Queues Required: No 00:28:30.952 Arbitration Mechanisms Supported 00:28:30.952 Weighted Round Robin: Not Supported 00:28:30.952 Vendor Specific: Not Supported 00:28:30.953 Reset Timeout: 7500 ms 00:28:30.953 Doorbell Stride: 4 bytes 00:28:30.953 NVM Subsystem Reset: Not Supported 00:28:30.953 Command Sets Supported 00:28:30.953 NVM Command Set: Supported 00:28:30.953 Boot Partition: Not Supported 00:28:30.953 
Memory Page Size Minimum: 4096 bytes 00:28:30.953 Memory Page Size Maximum: 4096 bytes 00:28:30.953 Persistent Memory Region: Not Supported 00:28:30.953 Optional Asynchronous Events Supported 00:28:30.953 Namespace Attribute Notices: Supported 00:28:30.953 Firmware Activation Notices: Not Supported 00:28:30.953 ANA Change Notices: Supported 00:28:30.953 PLE Aggregate Log Change Notices: Not Supported 00:28:30.953 LBA Status Info Alert Notices: Not Supported 00:28:30.953 EGE Aggregate Log Change Notices: Not Supported 00:28:30.953 Normal NVM Subsystem Shutdown event: Not Supported 00:28:30.953 Zone Descriptor Change Notices: Not Supported 00:28:30.953 Discovery Log Change Notices: Not Supported 00:28:30.953 Controller Attributes 00:28:30.953 128-bit Host Identifier: Supported 00:28:30.953 Non-Operational Permissive Mode: Not Supported 00:28:30.953 NVM Sets: Not Supported 00:28:30.953 Read Recovery Levels: Not Supported 00:28:30.953 Endurance Groups: Not Supported 00:28:30.953 Predictable Latency Mode: Not Supported 00:28:30.953 Traffic Based Keep ALive: Supported 00:28:30.953 Namespace Granularity: Not Supported 00:28:30.953 SQ Associations: Not Supported 00:28:30.953 UUID List: Not Supported 00:28:30.953 Multi-Domain Subsystem: Not Supported 00:28:30.953 Fixed Capacity Management: Not Supported 00:28:30.953 Variable Capacity Management: Not Supported 00:28:30.953 Delete Endurance Group: Not Supported 00:28:30.953 Delete NVM Set: Not Supported 00:28:30.953 Extended LBA Formats Supported: Not Supported 00:28:30.953 Flexible Data Placement Supported: Not Supported 00:28:30.953 00:28:30.953 Controller Memory Buffer Support 00:28:30.953 ================================ 00:28:30.953 Supported: No 00:28:30.953 00:28:30.953 Persistent Memory Region Support 00:28:30.953 ================================ 00:28:30.953 Supported: No 00:28:30.953 00:28:30.953 Admin Command Set Attributes 00:28:30.953 ============================ 00:28:30.953 Security Send/Receive: Not Supported 00:28:30.953 Format NVM: Not Supported 00:28:30.953 Firmware Activate/Download: Not Supported 00:28:30.953 Namespace Management: Not Supported 00:28:30.953 Device Self-Test: Not Supported 00:28:30.953 Directives: Not Supported 00:28:30.953 NVMe-MI: Not Supported 00:28:30.953 Virtualization Management: Not Supported 00:28:30.953 Doorbell Buffer Config: Not Supported 00:28:30.953 Get LBA Status Capability: Not Supported 00:28:30.953 Command & Feature Lockdown Capability: Not Supported 00:28:30.953 Abort Command Limit: 4 00:28:30.953 Async Event Request Limit: 4 00:28:30.953 Number of Firmware Slots: N/A 00:28:30.953 Firmware Slot 1 Read-Only: N/A 00:28:30.953 Firmware Activation Without Reset: N/A 00:28:30.953 Multiple Update Detection Support: N/A 00:28:30.953 Firmware Update Granularity: No Information Provided 00:28:30.953 Per-Namespace SMART Log: Yes 00:28:30.953 Asymmetric Namespace Access Log Page: Supported 00:28:30.953 ANA Transition Time : 10 sec 00:28:30.953 00:28:30.953 Asymmetric Namespace Access Capabilities 00:28:30.953 ANA Optimized State : Supported 00:28:30.953 ANA Non-Optimized State : Supported 00:28:30.953 ANA Inaccessible State : Supported 00:28:30.953 ANA Persistent Loss State : Supported 00:28:30.953 ANA Change State : Supported 00:28:30.953 ANAGRPID is not changed : No 00:28:30.953 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:30.953 00:28:30.953 ANA Group Identifier Maximum : 128 00:28:30.953 Number of ANA Group Identifiers : 128 00:28:30.953 Max Number of Allowed Namespaces : 1024 00:28:30.953 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:30.953 Command Effects Log Page: Supported 00:28:30.953 Get Log Page Extended Data: Supported 00:28:30.953 Telemetry Log Pages: Not Supported 00:28:30.953 Persistent Event Log Pages: Not Supported 00:28:30.953 Supported Log Pages Log Page: May Support 00:28:30.953 Commands Supported & Effects Log Page: Not Supported 00:28:30.953 Feature Identifiers & Effects Log Page:May Support 00:28:30.953 NVMe-MI Commands & Effects Log Page: May Support 00:28:30.953 Data Area 4 for Telemetry Log: Not Supported 00:28:30.953 Error Log Page Entries Supported: 128 00:28:30.953 Keep Alive: Supported 00:28:30.953 Keep Alive Granularity: 1000 ms 00:28:30.953 00:28:30.953 NVM Command Set Attributes 00:28:30.953 ========================== 00:28:30.953 Submission Queue Entry Size 00:28:30.953 Max: 64 00:28:30.953 Min: 64 00:28:30.953 Completion Queue Entry Size 00:28:30.953 Max: 16 00:28:30.953 Min: 16 00:28:30.953 Number of Namespaces: 1024 00:28:30.953 Compare Command: Not Supported 00:28:30.953 Write Uncorrectable Command: Not Supported 00:28:30.953 Dataset Management Command: Supported 00:28:30.953 Write Zeroes Command: Supported 00:28:30.953 Set Features Save Field: Not Supported 00:28:30.953 Reservations: Not Supported 00:28:30.953 Timestamp: Not Supported 00:28:30.953 Copy: Not Supported 00:28:30.953 Volatile Write Cache: Present 00:28:30.953 Atomic Write Unit (Normal): 1 00:28:30.953 Atomic Write Unit (PFail): 1 00:28:30.953 Atomic Compare & Write Unit: 1 00:28:30.953 Fused Compare & Write: Not Supported 00:28:30.953 Scatter-Gather List 00:28:30.953 SGL Command Set: Supported 00:28:30.953 SGL Keyed: Not Supported 00:28:30.953 SGL Bit Bucket Descriptor: Not Supported 00:28:30.953 SGL Metadata Pointer: Not Supported 00:28:30.953 Oversized SGL: Not Supported 00:28:30.953 SGL Metadata Address: Not Supported 00:28:30.953 SGL Offset: Supported 00:28:30.953 Transport SGL Data Block: Not Supported 00:28:30.953 Replay Protected Memory Block: Not Supported 00:28:30.953 00:28:30.953 Firmware Slot Information 00:28:30.953 ========================= 00:28:30.953 Active slot: 0 00:28:30.953 00:28:30.953 Asymmetric Namespace Access 00:28:30.953 =========================== 00:28:30.953 Change Count : 0 00:28:30.953 Number of ANA Group Descriptors : 1 00:28:30.953 ANA Group Descriptor : 0 00:28:30.953 ANA Group ID : 1 00:28:30.953 Number of NSID Values : 1 00:28:30.953 Change Count : 0 00:28:30.953 ANA State : 1 00:28:30.953 Namespace Identifier : 1 00:28:30.953 00:28:30.953 Commands Supported and Effects 00:28:30.953 ============================== 00:28:30.953 Admin Commands 00:28:30.953 -------------- 00:28:30.953 Get Log Page (02h): Supported 00:28:30.953 Identify (06h): Supported 00:28:30.953 Abort (08h): Supported 00:28:30.953 Set Features (09h): Supported 00:28:30.953 Get Features (0Ah): Supported 00:28:30.953 Asynchronous Event Request (0Ch): Supported 00:28:30.953 Keep Alive (18h): Supported 00:28:30.953 I/O Commands 00:28:30.953 ------------ 00:28:30.953 Flush (00h): Supported 00:28:30.953 Write (01h): Supported LBA-Change 00:28:30.953 Read (02h): Supported 00:28:30.953 Write Zeroes (08h): Supported LBA-Change 00:28:30.953 Dataset Management (09h): Supported 00:28:30.953 00:28:30.953 Error Log 00:28:30.953 ========= 00:28:30.953 Entry: 0 00:28:30.953 Error Count: 0x3 00:28:30.953 Submission Queue Id: 0x0 00:28:30.953 Command Id: 0x5 00:28:30.953 Phase Bit: 0 00:28:30.953 Status Code: 0x2 00:28:30.953 Status Code Type: 0x0 00:28:30.953 Do Not Retry: 1 00:28:30.953 
Error Location: 0x28 00:28:30.953 LBA: 0x0 00:28:30.953 Namespace: 0x0 00:28:30.953 Vendor Log Page: 0x0 00:28:30.953 ----------- 00:28:30.953 Entry: 1 00:28:30.953 Error Count: 0x2 00:28:30.953 Submission Queue Id: 0x0 00:28:30.953 Command Id: 0x5 00:28:30.953 Phase Bit: 0 00:28:30.953 Status Code: 0x2 00:28:30.953 Status Code Type: 0x0 00:28:30.953 Do Not Retry: 1 00:28:30.953 Error Location: 0x28 00:28:30.953 LBA: 0x0 00:28:30.953 Namespace: 0x0 00:28:30.953 Vendor Log Page: 0x0 00:28:30.953 ----------- 00:28:30.953 Entry: 2 00:28:30.953 Error Count: 0x1 00:28:30.953 Submission Queue Id: 0x0 00:28:30.953 Command Id: 0x4 00:28:30.953 Phase Bit: 0 00:28:30.953 Status Code: 0x2 00:28:30.953 Status Code Type: 0x0 00:28:30.953 Do Not Retry: 1 00:28:30.953 Error Location: 0x28 00:28:30.953 LBA: 0x0 00:28:30.953 Namespace: 0x0 00:28:30.953 Vendor Log Page: 0x0 00:28:30.953 00:28:30.953 Number of Queues 00:28:30.953 ================ 00:28:30.953 Number of I/O Submission Queues: 128 00:28:30.954 Number of I/O Completion Queues: 128 00:28:30.954 00:28:30.954 ZNS Specific Controller Data 00:28:30.954 ============================ 00:28:30.954 Zone Append Size Limit: 0 00:28:30.954 00:28:30.954 00:28:30.954 Active Namespaces 00:28:30.954 ================= 00:28:30.954 get_feature(0x05) failed 00:28:30.954 Namespace ID:1 00:28:30.954 Command Set Identifier: NVM (00h) 00:28:30.954 Deallocate: Supported 00:28:30.954 Deallocated/Unwritten Error: Not Supported 00:28:30.954 Deallocated Read Value: Unknown 00:28:30.954 Deallocate in Write Zeroes: Not Supported 00:28:30.954 Deallocated Guard Field: 0xFFFF 00:28:30.954 Flush: Supported 00:28:30.954 Reservation: Not Supported 00:28:30.954 Namespace Sharing Capabilities: Multiple Controllers 00:28:30.954 Size (in LBAs): 3750748848 (1788GiB) 00:28:30.954 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:30.954 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:30.954 UUID: 5f01ec0a-56df-4423-b7a4-18b5838abe05 00:28:30.954 Thin Provisioning: Not Supported 00:28:30.954 Per-NS Atomic Units: Yes 00:28:30.954 Atomic Write Unit (Normal): 8 00:28:30.954 Atomic Write Unit (PFail): 8 00:28:30.954 Preferred Write Granularity: 8 00:28:30.954 Atomic Compare & Write Unit: 8 00:28:30.954 Atomic Boundary Size (Normal): 0 00:28:30.954 Atomic Boundary Size (PFail): 0 00:28:30.954 Atomic Boundary Offset: 0 00:28:30.954 NGUID/EUI64 Never Reused: No 00:28:30.954 ANA group ID: 1 00:28:30.954 Namespace Write Protected: No 00:28:30.954 Number of LBA Formats: 1 00:28:30.954 Current LBA Format: LBA Format #00 00:28:30.954 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:30.954 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.954 rmmod nvme_tcp 00:28:30.954 rmmod nvme_fabrics 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.954 13:03:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.869 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.869 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:33.130 13:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:37.333 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:37.333 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:37.333 00:28:37.333 real 0m21.038s 00:28:37.333 user 0m5.712s 00:28:37.333 sys 0m12.372s 00:28:37.333 13:03:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.333 13:03:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.333 ************************************ 00:28:37.333 END TEST nvmf_identify_kernel_target 00:28:37.333 ************************************ 00:28:37.333 13:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:37.333 13:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:37.333 13:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.333 13:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.594 ************************************ 00:28:37.594 START TEST nvmf_auth_host 00:28:37.594 ************************************ 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:37.594 * Looking for test storage... 
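The identify_kernel_target run above drives the in-kernel nvmet target entirely through configfs before probing it with nvme discover and spdk_nvme_identify. A condensed sketch of that setup and teardown sequence follows; the echoed values, paths, and ordering are taken from the nvmf/common.sh trace above (@686-705 for setup, @712-723 for teardown), but the configfs attribute file names are inferred, since set -x does not print shell redirections.

# Export /dev/nvme0n1 as namespace 1 of nqn.2016-06.io.spdk:testnqn over NVMe/TCP.
# Redirection targets are assumptions (xtrace hides them); values match the trace.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string seen in the Identify data
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # the listener only exposes the subsystem once this link exists

# Teardown, as in clean_kernel_target above: disable the namespace, drop the
# port link, then remove the directories in reverse order of creation.
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

With the symlink in place, the two discovery log records shown above (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn) are exactly what a host should get back from nvme discover -t tcp -a 10.0.0.1 -s 4420.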
00:28:37.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:37.594 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.595 --rc genhtml_branch_coverage=1 00:28:37.595 --rc genhtml_function_coverage=1 00:28:37.595 --rc genhtml_legend=1 00:28:37.595 --rc geninfo_all_blocks=1 00:28:37.595 --rc geninfo_unexecuted_blocks=1 00:28:37.595 00:28:37.595 ' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.595 --rc genhtml_branch_coverage=1 00:28:37.595 --rc genhtml_function_coverage=1 00:28:37.595 --rc genhtml_legend=1 00:28:37.595 --rc geninfo_all_blocks=1 00:28:37.595 --rc geninfo_unexecuted_blocks=1 00:28:37.595 00:28:37.595 ' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.595 --rc genhtml_branch_coverage=1 00:28:37.595 --rc genhtml_function_coverage=1 00:28:37.595 --rc genhtml_legend=1 00:28:37.595 --rc geninfo_all_blocks=1 00:28:37.595 --rc geninfo_unexecuted_blocks=1 00:28:37.595 00:28:37.595 ' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.595 --rc genhtml_branch_coverage=1 00:28:37.595 --rc genhtml_function_coverage=1 00:28:37.595 --rc genhtml_legend=1 00:28:37.595 --rc geninfo_all_blocks=1 00:28:37.595 --rc geninfo_unexecuted_blocks=1 00:28:37.595 00:28:37.595 ' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.595 13:03:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.595 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.856 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.856 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:37.856 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.856 13:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:45.995 13:03:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:45.995 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:45.995 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:45.996 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.996 
13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:45.996 Found net devices under 0000:31:00.0: cvl_0_0 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:45.996 Found net devices under 0000:31:00.1: cvl_0_1 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.996 13:03:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:45.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:28:45.996 00:28:45.996 --- 10.0.0.2 ping statistics --- 00:28:45.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.996 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:28:45.996
00:28:45.996 --- 10.0.0.1 ping statistics ---
00:28:45.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:45.996 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=798764
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 798764
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 798764 ']'
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
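nvmfappstart above launches the SPDK target inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app is reachable. A minimal sketch of that launch-and-wait pattern; the binary path, flags, and the max_retries=100 bound come from the trace, while the rpc_get_methods probe is an assumption about how the readiness check is implemented.

# Start nvmf_tgt in the target's network namespace (nvmf/common.sh@508-510 above).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

# Poll the default RPC socket until the app answers; rpc_get_methods is a
# standard SPDK RPC that succeeds as soon as the server is up.
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done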
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:45.996 13:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a1fccf0db59c275d6a88310600ffa281
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PXx
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a1fccf0db59c275d6a88310600ffa281 0
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a1fccf0db59c275d6a88310600ffa281 0
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a1fccf0db59c275d6a88310600ffa281
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PXx
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PXx
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PXx
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:28:46.258 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=691e4ff1db69ebd60a902dc5d9f0bc13c523801bd186f3819239f42999db4127
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DrR
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 691e4ff1db69ebd60a902dc5d9f0bc13c523801bd186f3819239f42999db4127 3
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 691e4ff1db69ebd60a902dc5d9f0bc13c523801bd186f3819239f42999db4127 3
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=691e4ff1db69ebd60a902dc5d9f0bc13c523801bd186f3819239f42999db4127
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DrR
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DrR
00:28:46.518 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DrR
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ade4819598d1d28e75aa28ea4fab0e5bbd2ad43e25357fe7
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dQW
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ade4819598d1d28e75aa28ea4fab0e5bbd2ad43e25357fe7 0
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ade4819598d1d28e75aa28ea4fab0e5bbd2ad43e25357fe7 0
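Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps the hex string as the secret, and hands it to format_key together with a digest index (null=0, sha256=1, sha384=2, sha512=3, per the digests map in the trace). The python step itself is not echoed by xtrace, so the sketch below reconstructs the DHHC-1 encoding from the NVMe-oF DH-HMAC-CHAP secret representation (base64 over the secret bytes followed by their little-endian CRC-32); treat it as an approximation of SPDK's helper, not a verbatim copy.

# Sketch of format_key DHHC-1 <hexkey> <digest-index>, e.g. "format_key DHHC-1 a1fc... 0" above.
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
data = key.encode()                        # the hex string itself is the secret material
crc = struct.pack("<I", zlib.crc32(data))  # little-endian CRC-32 trailer per the spec
print(f"{prefix}:{digest:02x}:{base64.b64encode(data + crc).decode()}:")
EOF
}
format_key DHHC-1 a1fccf0db59c275d6a88310600ffa281 0
# prints DHHC-1:00:<base64>: -- the form accepted by nvme connect --dhchap-secret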
00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ade4819598d1d28e75aa28ea4fab0e5bbd2ad43e25357fe7 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dQW 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dQW 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.dQW 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c1c851b862e45c5a716e78db37257bc829429ed29efbf92a 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zwY 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c1c851b862e45c5a716e78db37257bc829429ed29efbf92a 2 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c1c851b862e45c5a716e78db37257bc829429ed29efbf92a 2 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c1c851b862e45c5a716e78db37257bc829429ed29efbf92a 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zwY 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zwY 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zwY 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:46.519 13:03:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7400b12c48ec3bf602c4dfc3ad51a727 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dBV 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7400b12c48ec3bf602c4dfc3ad51a727 1 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7400b12c48ec3bf602c4dfc3ad51a727 1 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7400b12c48ec3bf602c4dfc3ad51a727 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dBV 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dBV 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.dBV 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:46.519 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=67a9049553ce3d4d356295421fe8cb4f 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2aq 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 67a9049553ce3d4d356295421fe8cb4f 1 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 67a9049553ce3d4d356295421fe8cb4f 1 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=67a9049553ce3d4d356295421fe8cb4f 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2aq 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2aq 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2aq 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b8c020edca7987b5d2956d49551c66af6107afb37240839f 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wbB 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b8c020edca7987b5d2956d49551c66af6107afb37240839f 2 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b8c020edca7987b5d2956d49551c66af6107afb37240839f 2 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b8c020edca7987b5d2956d49551c66af6107afb37240839f 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wbB 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wbB 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wbB 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:46.781 13:03:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0d66cc24e091b8e078eb6ba638411f20 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.KhP 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0d66cc24e091b8e078eb6ba638411f20 0 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0d66cc24e091b8e078eb6ba638411f20 0 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0d66cc24e091b8e078eb6ba638411f20 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.KhP 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.KhP 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.KhP 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=28603f72aff37919907f838f8dcd7737d829456383ac6512dd2f75f2b4508a11 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.OJN 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 28603f72aff37919907f838f8dcd7737d829456383ac6512dd2f75f2b4508a11 3 00:28:46.781 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 28603f72aff37919907f838f8dcd7737d829456383ac6512dd2f75f2b4508a11 3 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=28603f72aff37919907f838f8dcd7737d829456383ac6512dd2f75f2b4508a11 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.OJN 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.OJN 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.OJN 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 798764 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 798764 ']' 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.782 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PXx 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DrR ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DrR 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dQW 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zwY ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.zwY 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.dBV 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2aq ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2aq 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wbB 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.KhP ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.KhP 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.OJN 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.044 13:03:26 
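[With the target up (waitforlisten 798764 returned), the host/auth.sh@80-82 loop above registers every generated file with the target's keyring as key0..key4 plus the controller-side counterparts ckey0..ckey3; ckeys[4] is deliberately left empty, which is why the final [[ -n '' ]] check skips it. rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the loop amounts to this sketch:]

    for i in "${!keys[@]}"; do
        scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
        # ckey4 was never generated, so guard the controller-side key
        [[ -n ${ckeys[$i]} ]] && scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done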
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:47.044 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:47.045 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:47.045 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:47.045 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:47.305 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:47.305 13:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:51.513 Waiting for block devices as requested 00:28:51.513 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:51.513 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:51.513 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:51.513 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:51.513 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:51.513 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:51.513 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:51.513 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:51.513 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:51.774 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:51.774 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:51.774 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:52.036 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:52.036 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:52.036 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:52.036 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:52.298 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:53.243 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:53.243 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:53.243 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:53.243 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:53.243 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:53.243 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:53.243 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:53.244 No valid GPT data, bailing 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:53.244 13:03:32 
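[configure_kernel_target then assembles a kernel nvmet target over configfs: the mkdir calls above create the subsystem, namespace, and port nodes, and the echo/ln -s calls that follow in the trace fill in their attributes and wire the port to the subsystem. A condensed sketch; the destination attribute names are inferred from the standard nvmet configfs layout (the trace records only the values being written, so attr_model in particular is an assumption):]

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"    # auth narrows this later via allowed_hosts
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"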
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:53.244 13:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:53.244 00:28:53.244 Discovery Log Number of Records 2, Generation counter 2 00:28:53.244 =====Discovery Log Entry 0====== 00:28:53.244 trtype: tcp 00:28:53.244 adrfam: ipv4 00:28:53.244 subtype: current discovery subsystem 00:28:53.244 treq: not specified, sq flow control disable supported 00:28:53.244 portid: 1 00:28:53.244 trsvcid: 4420 00:28:53.244 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:53.244 traddr: 10.0.0.1 00:28:53.244 eflags: none 00:28:53.244 sectype: none 00:28:53.244 =====Discovery Log Entry 1====== 00:28:53.244 trtype: tcp 00:28:53.244 adrfam: ipv4 00:28:53.244 subtype: nvme subsystem 00:28:53.244 treq: not specified, sq flow control disable supported 00:28:53.244 portid: 1 00:28:53.244 trsvcid: 4420 00:28:53.244 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:53.244 traddr: 10.0.0.1 00:28:53.244 eflags: none 00:28:53.244 sectype: none 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.244 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.506 nvme0n1 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
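[The echo redirections inside each nvmet_auth_set_key call (host/auth.sh@42-51, traced above for sha256/ffdhe2048/keyid 0) land in the host entry that nvmet_auth_init created under /sys/kernel/config/nvmet/hosts. The trace shows only the values, so the target files in this sketch follow the kernel's nvmet DH-CHAP configfs attributes, which is an assumption; the secrets are truncated here:]

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'         > "$host/dhchap_hash"      # negotiated digest
    echo ffdhe2048              > "$host/dhchap_dhgroup"   # negotiated DH group
    echo "DHHC-1:00:YTFm...:"   > "$host/dhchap_key"       # keys[0], host secret (truncated)
    echo "DHHC-1:03:Njkx...:"   > "$host/dhchap_ctrl_key"  # ckeys[0], controller secret (truncated)

[connect_authenticate then mirrors the same material on the SPDK side: bdev_nvme_set_options advertises the digests and DH groups to negotiate, and bdev_nvme_attach_controller presents --dhchap-key key0 --dhchap-ctrlr-key ckey0 for the actual DH-HMAC-CHAP exchange, as the next trace entries show.]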
00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.506 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.769 nvme0n1 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.769 13:03:33 
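[Every iteration closes with the same assertion: list the controllers over RPC, check that the authenticated controller actually materialized, and tear it down before the next digest/dhgroup/key combination. In rpc.py terms, a sketch of host/auth.sh@64-65:]

    # Pass only if DH-HMAC-CHAP produced a live controller; the nvme0n1
    # entries in the log are the namespace probes of each successful attach.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0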
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.769 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.030 nvme0n1 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.030 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.031 nvme0n1 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:54.031 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:54.292 13:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.292 nvme0n1 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.292 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.554 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.555 nvme0n1 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.555 13:03:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.555 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:28:54.817 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.818 nvme0n1 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.818 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.079 
13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.079 nvme0n1 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.079 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.340 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.340 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.340 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.340 13:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.340 13:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.340 nvme0n1 00:28:55.340 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.341 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.341 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.341 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.341 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.341 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.602 13:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.602 nvme0n1 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.602 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:55.869 13:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.869 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:55.870 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.870 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.870 nvme0n1 00:28:55.870 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.870 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.870 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.870 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.870 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.870 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.140 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.141 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.141 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.141 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.141 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.141 13:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.435 nvme0n1 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:56.435 13:03:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.435 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.732 nvme0n1 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.732 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.994 nvme0n1 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.994 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.255 13:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.517 nvme0n1 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.517 13:03:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.517 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.779 nvme0n1 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.779 13:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.352 nvme0n1 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 
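
The host/auth.sh@42-@51 entries just above are one expansion of nvmet_auth_set_key, which programs the target (nvmet) side for the next connection attempt. xtrace does not print redirections, so the targets of the echo commands are invisible in the log; the sketch below assumes they are the standard Linux nvmet configfs host attributes, and the host directory path is likewise an assumption based on the hostnqn seen elsewhere in the trace:

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey                  # @42
        digest="$1" dhgroup="$2" keyid="$3"                  # @44
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"          # @45-@46
        local host="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"  # assumed path
        echo "hmac($digest)" > "$host/dhchap_hash"           # @48: e.g. 'hmac(sha256)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"             # @49: e.g. ffdhe6144
        echo "$key" > "$host/dhchap_key"                     # @50: the DHHC-1 host secret
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # @51: optional controller secret
    }

The conditional on the last line explains why keyid 4 never writes a controller key in this log: its ckey is empty (the @51 test traces as [[ -z '' ]]), so only a unidirectional secret is configured for that case.
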
00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.352 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.924 nvme0n1 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.924 13:03:38 
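
The host-side half of each iteration is connect_authenticate, whose host/auth.sh@55-@65 entries bracket this point in the log. Reconstructed from those entries (the NQNs and the tcp/ipv4/4420 endpoint appear literally in the trace; in the real script they presumably come from variables):

    connect_authenticate() {
        local digest dhgroup keyid ckey                      # @55
        digest="$1" dhgroup="$2" keyid="$3"                  # @57
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # @58: optional argument pair
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"                     # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"          # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @64
        rpc_cmd bdev_nvme_detach_controller nvme0            # @65
    }

The @58 idiom is worth pausing on: with ${var:+...}, the array stays empty when ckeys[keyid] is unset or empty, and otherwise expands to the two words --dhchap-ctrlr-key and ckeyN. A standalone demo with placeholder values (not the real secrets):

    ckeys=([1]="DHHC-1:02:placeholder:" [4]="")
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args:" "${ckey[@]}"
    done
    # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
    # keyid=4 -> 0 extra args:

That is exactly the difference between the bidirectional attach calls in this log (keys 0-3 pass --dhchap-ctrlr-key ckeyN) and the key4 calls, which authenticate in one direction only.
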
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.924 13:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.497 nvme0n1 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.497 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.070 nvme0n1 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
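
A note on the scaffolding that accounts for much of the volume here: every rpc_cmd call is bracketed by a common/autotest_common.sh@563 entry (xtrace_disable, whose visible tail is the set +x at @10) and, once tracing resumes, the "[[ 0 == 0 ]]" check at @591 that re-tests the saved exit status, so the RPC plumbing stays out of the trace while failures still propagate. A minimal sketch of that silencing pattern; only the @10 and @591 lines are taken from the log, the restore-side helper's name and body are assumptions, and rpc_cmd itself is SPDK's test wrapper around scripts/rpc.py:

    xtrace_disable() {
        set +x                         # @10: stop tracing inside helpers
    }
    xtrace_restore() {
        local rc=$?                    # status of the silenced command
        set -x                         # tracing back on, so...
        [[ $rc == 0 ]]                 # ...this @591-style check is what gets logged
    }
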
common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.070 13:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.331 nvme0n1 00:29:00.331 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.331 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.331 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.331 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.331 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.593 13:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:01.166 nvme0n1 00:29:01.166 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.166 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.166 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.166 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.166 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.427 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.371 nvme0n1 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:02.371 
13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:02.371 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.372 13:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.942 nvme0n1 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.942 
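
The nvmf/common.sh@769-@783 runs repeated throughout this section are get_main_ns_ip, which resolves the address used by every attach call. The trace looks odd at first glance (ip=NVMF_INITIATOR_IP at @776, yet 10.0.0.1 at @778 and @783) because the function stores the name of an environment variable and dereferences it with bash indirection. A reconstruction; the failure branches are assumptions, since only the success path appears in the log, and TEST_TRANSPORT is an assumed name for the variable that holds "tcp":

    get_main_ns_ip() {
        local ip                                             # @769
        local -A ip_candidates=()                            # @770
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP           # @772
        ip_candidates["tcp"]=NVMF_INITIATOR_IP               # @773
        [[ -z "$TEST_TRANSPORT" ]] && return 1               # @775: traced as [[ -z tcp ]]
        [[ -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1  # @775
        ip=${ip_candidates[$TEST_TRANSPORT]}                 # @776: ip holds a variable NAME
        [[ -z "${!ip}" ]] && return 1                        # @778: tests the dereferenced 10.0.0.1
        echo "${!ip}"                                        # @783: prints 10.0.0.1
    }

For this tcp run NVMF_INITIATOR_IP is 10.0.0.1, which is why every bdev_nvme_attach_controller in the log targets -a 10.0.0.1 -s 4420.
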
13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.942 13:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.883 nvme0n1 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.884 13:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.824 nvme0n1 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.824 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.825 nvme0n1 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.825 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.085 nvme0n1 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.085 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:05.086 13:03:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.086 13:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.345 nvme0n1 00:29:05.345 13:03:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:05.345 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.346 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.606 nvme0n1 00:29:05.606 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.607 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.868 nvme0n1 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.868 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.869 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.130 nvme0n1 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.130 
13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.130 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.131 13:03:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.131 13:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.392 nvme0n1 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:06.392 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.393 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.654 nvme0n1 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.654 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.915 nvme0n1 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:06.915 
13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.915 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.177 nvme0n1 00:29:07.177 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.178 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.178 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.178 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.178 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.178 
13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.178 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.178 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.178 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.178 13:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.178 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.438 nvme0n1 00:29:07.438 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.438 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.438 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.438 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.438 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.438 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.699 13:03:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.699 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.961 nvme0n1 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.961 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.962 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.962 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.962 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.962 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.962 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.962 13:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.223 nvme0n1 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
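
The four echoes at auth.sh@48-@51 ('hmac(sha384)', the DH group, then the two DHHC-1 secrets) are, by all appearances, redirected into the Linux nvmet configfs attributes for the allowed host; the redirections themselves are invisible in the xtrace, only the echoes show. A plausible expansion, assuming the standard nvmet configfs layout and the host NQN used throughout this run:

    # Assumed target-side effect of nvmet_auth_set_key (paths are not shown in the trace):
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"        # auth.sh@48
    echo 'ffdhe4096'    > "$host/dhchap_dhgroup"     # auth.sh@49
    echo "$key"         > "$host/dhchap_key"         # auth.sh@50, host secret
    echo "$ckey"        > "$host/dhchap_ctrl_key"    # auth.sh@51, controller secret
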
nvmet_auth_set_key sha384 ffdhe4096 3 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.223 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.224 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.486 nvme0n1 00:29:08.486 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.486 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.486 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.486 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.486 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:08.747 13:03:48 
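
On the initiator side, connect_authenticate first narrows the bdev_nvme module to exactly one digest and one DH group (auth.sh@60), then attaches with the key pair (auth.sh@61); rpc_cmd is the autotest wrapper that forwards to SPDK's scripts/rpc.py. The same two steps issued directly would look like this (default RPC socket assumed; key3/ckey3 are names of keys registered with SPDK's keyring earlier in the run, outside this excerpt):

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
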
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.747 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.008 nvme0n1 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
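
keyid 4 is the one asymmetric case in the sweep: its ckeys entry is empty (auth.sh@46 sets ckey= to nothing), so the attach above carries --dhchap-key key4 but no --dhchap-ctrlr-key, i.e. the host authenticates to the target without requesting bidirectional authentication. The auth.sh@58 line that recurs in the trace is the bash idiom making that flag optional, an array that expands to zero words when the controller key is absent:

    # auth.sh@58, verbatim from the trace: empty array when ckeys[keyid] is unset/empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Later: rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"
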
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.008 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.009 13:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.579 nvme0n1 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.579 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.580 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.151 nvme0n1 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.151 13:03:49 
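
The ip_candidates run that precedes every attach is the get_main_ns_ip helper resolving which address to dial for the transport under test: an associative array maps each transport to the name of an environment variable, and bash indirect expansion turns that name into its value (NVMF_INITIATOR_IP -> 10.0.0.1 for tcp). Reconstructed from the nvmf/common.sh@769-@783 markers; TEST_TRANSPORT is the evident input, expanded to "tcp" in the traced [[ -z tcp ]] test:

    get_main_ns_ip() {                               # nvmf/common.sh@769-783, sketch
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # a variable *name*
        ip=${!ip}                                    # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
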
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.151 13:03:49 
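
The bare nvme0n1 lines scattered through the output are the stdout of the attach RPC, which prints the bdev it created for the namespace; they are the first sign that each handshake succeeded. connect_authenticate then asserts that exactly the expected controller exists before tearing it down, as traced at auth.sh@64-@65:

    # Post-attach verification and teardown (the \n\v\m\e\0 in the trace is
    # just xtrace escaping of the right-hand side of this comparison):
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
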
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.151 13:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.724 nvme0n1 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:10.724 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:10.725 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.725 
13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.297 nvme0n1 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.297 13:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.297 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.870 nvme0n1 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.870 13:03:51 
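
A note on reading these entries: each one carries Jenkins' wall-clock offset, the script's own timestamp, the test name (nvmf_tcp.nvmf_host.nvmf_auth_host), the source location of the traced command (file@line), and the command itself. The common/autotest_common.sh@563 xtrace_disable / @10 set +x pairs bracket every rpc_cmd call, muting bash tracing while the RPC runs so only the command and its output land in the log; the recurring @591 [[ 0 == 0 ]] appears to be the wrapper re-checking the RPC's exit status as tracing is restored.
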
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.870 13:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.441 nvme0n1 00:29:12.441 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.441 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.441 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.441 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.441 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.702 13:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.645 nvme0n1 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I:
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw:
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I:
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]]
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw:
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.645 13:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.216 nvme0n1
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
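The get_main_ns_ip expansion traced before every attach resolves the connect address from the transport type. A hedged reconstruction of that helper from the traced statements (the names TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP are taken from the trace; the real nvmf/common.sh may differ in detail):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        # Map each transport to the env variable that holds its IP.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Both guards appear in the trace: transport must be set and known.
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion: ${!ip} reads e.g. $NVMF_INITIATOR_IP.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"    # prints 10.0.0.1 in this run
    }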
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==:
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6:
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==:
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]]
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6:
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:14.216 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.160 nvme0n1
00:29:15.160 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.160 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:15.160 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:15.160 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.160 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.160 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.161 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:15.161 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:15.161 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.161 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.161 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
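The ckey=(...) line traced at host/auth.sh@58 is what lets keyid 4 run without a controller key (ckey4 is empty, so bidirectional authentication is skipped and attach_controller gets no --dhchap-ctrlr-key at all). A small self-contained illustration of that ${var:+...} array expansion; the sample secret is a placeholder, not a real key:

    # Non-empty ckeys[keyid] -> two array elements; empty -> empty array.
    declare -a ckeys=([1]="DHHC-1:01:placeholder:" [4]="")
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
    done
    # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
    # keyid=4 -> 0 extra args: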
00:29:15.161 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:15.161 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=:
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=:
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:15.162 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:15.163 13:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.107 nvme0n1
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp:
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=:
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp:
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=:
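nvmet_auth_set_key programs the kernel nvmet target with the same digest, dhgroup and secrets before each connect attempt. The echoes traced above do not show their redirection targets, so the following is only a hedged sketch of the usual kernel-target configfs layout such writes would land in; the configfs path and attribute names are assumptions based on the standard Linux nvmet auth interface, not shown in this log, and the secrets are elided:

    # Program DH-CHAP parameters for one host on a kernel nvmet target
    # (assumed layout; adjust to the actual target configuration).
    host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'  > "$host_cfs/dhchap_hash"      # digest
    echo ffdhe2048       > "$host_cfs/dhchap_dhgroup"   # DH group
    echo 'DHHC-1:00:...' > "$host_cfs/dhchap_key"       # host secret (key0)
    echo 'DHHC-1:03:...' > "$host_cfs/dhchap_ctrl_key"  # ctrl secret (ckey0)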
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.107 nvme0n1
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==:
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==:
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==:
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]]
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==:
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:16.107 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.108 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:29:16.108 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.108 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.108 13:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.108 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.369 nvme0n1
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I:
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw:
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I:
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw:
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.369 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.630 nvme0n1
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==:
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6:
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==:
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6:
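All secrets in this run use the DHHC-1 representation, DHHC-1:<subtype>:<base64 payload>:, where the payload is the raw secret followed by a 4-byte CRC-32 and the subtype encodes the secret length (in this log, subtype 01 keys carry 32-byte secrets, 02 carry 48, 03 carry 64). A quick check of one traced key, assuming only standard base64 and wc utilities:

    # 48 base64 chars decode to 36 bytes: a 32-byte secret + 4-byte CRC-32.
    key='DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I:'
    payload=${key#DHHC-1:01:}; payload=${payload%:}
    echo -n "$payload" | base64 -d | wc -c    # prints 36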
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.630 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.891 nvme0n1
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
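Each iteration above ends the same way: list the controllers, compare the name, detach. A condensed sketch of that verify-and-clean-up step as plain rpc.py calls (the jq filter is the one traced; in the test itself these run through the rpc_cmd wrapper under errexit):

    # The attach succeeded iff bdev_nvme now reports controller "nvme0".
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                             # mismatch fails the test
    scripts/rpc.py bdev_nvme_detach_controller nvme0   # reset for next keyid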
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:16.891 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=:
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=:
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:16.892 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.154 nvme0n1
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp:
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=:
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp:
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=:
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.154 13:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.416 nvme0n1
00:29:17.416 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.416 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.416 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.416 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.416 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.416 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.416 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
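The @563/@10/@591 triple that brackets every RPC above is the common/autotest_common.sh plumbing: tracing is silenced around the rpc.py invocation and the [[ 0 == 0 ]] is the traced status check when tracing resumes. Purely as an illustrative sketch of that shape (SPDK's real rpc_cmd keeps a persistent RPC daemon and differs in implementation; this is not its actual code):

    rpc_cmd() {
        xtrace_disable                     # the @563 record in the trace
        local rc=0
        "$rootdir/scripts/rpc.py" "$@" || rc=$?
        xtrace_restore                     # re-enables set -x
        [[ $rc == 0 ]]                     # the traced [[ 0 == 0 ]] check
    }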
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==:
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==:
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==:
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]]
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==:
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.417 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.683 nvme0n1
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I:
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw:
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I:
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw:
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.683 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.943 nvme0n1
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==:
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6:
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==:
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]]
00:29:17.943 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6:
00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.944 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.206 nvme0n1 00:29:18.206 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.206 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.206 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.206 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.206 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.206 13:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:18.206 
13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.206 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.207 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
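The trace above is one pass of the connect_authenticate loop in host/auth.sh: for each dhgroup and key id, the target key is installed with nvmet_auth_set_key, the host is restricted to a single digest/dhgroup pair via bdev_nvme_set_options, the controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), the controller name is checked through bdev_nvme_get_controllers | jq, and the controller is detached again. A condensed sketch of that cycle, assuming the harness helpers (rpc_cmd, nvmet_auth_set_key) and the keys/ckeys arrays are already set up as in the surrounding test:

    # One digest pass of the cycle shown in the xtrace above (sha512 here).
    digest=sha512
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            # Target side: install the key for this digest/dhgroup/keyid.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # Host side: allow only this digest/dhgroup for negotiation.
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # Same array idiom as auth.sh@58: omit --dhchap-ctrlr-key when
            # no controller key exists for this keyid.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            # Verify the controller authenticated and came up, then tear down.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The ${ckeys[keyid]:+...} expansion mirrors what the trace shows at auth.sh@51/@58: keyid 4 has an empty ckey, so --dhchap-ctrlr-key is simply omitted for it while keyids 0-3 attach with both key and controller key.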
00:29:18.468 nvme0n1 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.468 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:18.469 13:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.469 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.730 nvme0n1 00:29:18.730 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.730 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.730 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:18.730 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.730 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.730 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:18.991 13:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:18.991 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:18.992 13:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.992 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.253 nvme0n1 00:29:19.253 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.253 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.253 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.253 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.253 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.253 13:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:19.253 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.254 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.538 nvme0n1 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.538 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.798 nvme0n1 00:29:19.798 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.798 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.798 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:19.798 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.798 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.798 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.061 13:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.322 nvme0n1 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.322 13:04:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.322 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.893 nvme0n1 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:20.893 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.894 13:04:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.894 13:04:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.465 nvme0n1 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.465 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.034 nvme0n1 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.034 13:04:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 nvme0n1 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:22.603 13:04:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:22.603 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.604 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.175 nvme0n1 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTFmY2NmMGRiNTljMjc1ZDZhODgzMTA2MDBmZmEyODGOKhDp: 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: ]] 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjkxZTRmZjFkYjY5ZWJkNjBhOTAyZGM1ZDlmMGJjMTNjNTIzODAxYmQxODZmMzgxOTIzOWY0Mjk5OWRiNDEyN2Fr5Dg=: 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:23.175 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:23.176 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:23.176 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:23.176 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:23.176 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.176 13:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.746 nvme0n1 00:29:23.746 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.746 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:23.746 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:23.746 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.746 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.746 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.007 13:04:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.578 nvme0n1 00:29:24.578 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.578 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:24.578 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:24.578 13:04:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.578 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.578 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.838 13:04:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.838 13:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.408 nvme0n1 00:29:25.408 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.408 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:25.408 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:25.408 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.408 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.408 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:25.668 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjhjMDIwZWRjYTc5ODdiNWQyOTU2ZDQ5NTUxYzY2YWY2MTA3YWZiMzcyNDA4MzlmN5r+vg==: 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: ]] 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQ2NmNjMjRlMDkxYjhlMDc4ZWI2YmE2Mzg0MTFmMjBCAsW6: 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:25.669 13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.669 
13:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.239 nvme0n1 00:29:26.239 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.239 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:26.239 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:26.239 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.239 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.239 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjg2MDNmNzJhZmYzNzkxOTkwN2Y4MzhmOGRjZDc3MzdkODI5NDU2MzgzYWM2NTEyZGQyZjc1ZjJiNDUwOGExMSfk0MU=: 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.499 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.067 nvme0n1 00:29:27.067 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.067 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.067 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:27.067 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.067 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.067 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.327 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.327 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.327 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.327 13:04:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.327 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.327 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:27.327 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.327 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.328 request: 00:29:27.328 { 00:29:27.328 "name": "nvme0", 00:29:27.328 "trtype": "tcp", 00:29:27.328 "traddr": "10.0.0.1", 00:29:27.328 "adrfam": "ipv4", 00:29:27.328 "trsvcid": "4420", 00:29:27.328 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:27.328 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:27.328 "prchk_reftag": false, 00:29:27.328 "prchk_guard": false, 00:29:27.328 "hdgst": false, 00:29:27.328 "ddgst": false, 00:29:27.328 "allow_unrecognized_csi": false, 00:29:27.328 "method": "bdev_nvme_attach_controller", 00:29:27.328 "req_id": 1 00:29:27.328 } 00:29:27.328 Got JSON-RPC error response 00:29:27.328 response: 00:29:27.328 { 00:29:27.328 "code": -5, 00:29:27.328 "message": "Input/output error" 00:29:27.328 } 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
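The request/response pair above is the first negative case in the run: the host calls bdev_nvme_attach_controller with no DH-CHAP key while the target, provisioned with key 1 via nvmet_auth_set_key just before, still requires authentication, so the RPC fails with code -5 (Input/output error) and the NOT wrapper counts the non-zero exit as the expected result. A minimal sketch of issuing the same call by hand, assuming a running target at 10.0.0.1:4420 and the stock rpc.py client (the scripts/rpc.py path is illustrative):

    # Same parameters as the JSON-RPC request captured above; no --dhchap-key given.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # Expected to fail with -5 (Input/output error): the DH-HMAC-CHAP
    # handshake cannot complete when the host offers no key.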
00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.328 request: 00:29:27.328 { 00:29:27.328 "name": "nvme0", 00:29:27.328 "trtype": "tcp", 00:29:27.328 "traddr": "10.0.0.1", 00:29:27.328 "adrfam": "ipv4", 00:29:27.328 "trsvcid": "4420", 00:29:27.328 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:27.328 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:27.328 "prchk_reftag": false, 00:29:27.328 "prchk_guard": false, 00:29:27.328 "hdgst": false, 00:29:27.328 "ddgst": false, 00:29:27.328 "dhchap_key": "key2", 00:29:27.328 "allow_unrecognized_csi": false, 00:29:27.328 "method": "bdev_nvme_attach_controller", 00:29:27.328 "req_id": 1 00:29:27.328 } 00:29:27.328 Got JSON-RPC error response 00:29:27.328 response: 00:29:27.328 { 00:29:27.328 "code": -5, 00:29:27.328 "message": "Input/output error" 00:29:27.328 } 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:27.328 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.588 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.588 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
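The second negative case just above offers the wrong key instead: --dhchap-key key2 names a keyring entry holding the DHHC-1:01:NzQw... secret echoed earlier, but the target side was set up with key 1, so the handshake again ends in -5 and the jq length check confirms no controller was created. A sketch of how such a named key is typically registered before the attach, assuming the secret was saved to a file first (the keyring_file_add_key step and the /tmp path are illustrative; the actual registration happens earlier in the run, outside this excerpt):

    # Register the DHHC-1 secret under the name key2, then offer it at attach.
    echo "DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I:" > /tmp/key2
    scripts/rpc.py keyring_file_add_key key2 /tmp/key2
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2
    # Still fails with -5: the target expects the secret provisioned as key 1,
    # so a challenge answered with key 2 cannot authenticate.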
00:29:27.588 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.589 request: 00:29:27.589 { 00:29:27.589 "name": "nvme0", 00:29:27.589 "trtype": "tcp", 00:29:27.589 "traddr": "10.0.0.1", 00:29:27.589 "adrfam": "ipv4", 00:29:27.589 "trsvcid": "4420", 00:29:27.589 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:27.589 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:27.589 "prchk_reftag": false, 00:29:27.589 "prchk_guard": false, 00:29:27.589 "hdgst": false, 00:29:27.589 "ddgst": false, 00:29:27.589 "dhchap_key": "key1", 00:29:27.589 "dhchap_ctrlr_key": "ckey2", 00:29:27.589 "allow_unrecognized_csi": false, 00:29:27.589 "method": "bdev_nvme_attach_controller", 00:29:27.589 "req_id": 1 00:29:27.589 } 00:29:27.589 Got JSON-RPC error response 00:29:27.589 response: 00:29:27.589 { 00:29:27.589 "code": -5, 00:29:27.589 "message": "Input/output 
error" 00:29:27.589 } 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.589 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.850 nvme0n1 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.850 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.851 request: 00:29:27.851 { 00:29:27.851 "name": "nvme0", 00:29:27.851 "dhchap_key": "key1", 00:29:27.851 "dhchap_ctrlr_key": "ckey2", 00:29:27.851 "method": "bdev_nvme_set_keys", 00:29:27.851 "req_id": 1 00:29:27.851 } 00:29:27.851 Got JSON-RPC error response 00:29:27.851 response: 00:29:27.851 { 00:29:27.851 "code": -13, 00:29:27.851 "message": "Permission denied" 00:29:27.851 } 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:27.851 13:04:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:29.237 13:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:29.237 13:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:29.237 13:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.237 13:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.237 13:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.237 13:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:29.237 13:04:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.179 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRlNDgxOTU5OGQxZDI4ZTc1YWEyOGVhNGZhYjBlNWJiZDJhZDQzZTI1MzU3ZmU3jhmB1A==: 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: ]] 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzFjODUxYjg2MmU0NWM1YTcxNmU3OGRiMzcyNTdiYzgyOTQyOWVkMjllZmJmOTJh0NhkPg==: 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.180 13:04:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:30.180 nvme0n1 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzQwMGIxMmM0OGVjM2JmNjAyYzRkZmMzYWQ1MWE3MjcrfZ3I: 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: ]] 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjdhOTA0OTU1M2NlM2Q0ZDM1NjI5NTQyMWZlOGNiNGYd2ojw: 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.180 request:
00:29:30.180 {
00:29:30.180 "name": "nvme0",
00:29:30.180 "dhchap_key": "key2",
00:29:30.180 "dhchap_ctrlr_key": "ckey1",
00:29:30.180 "method": "bdev_nvme_set_keys",
00:29:30.180 "req_id": 1
00:29:30.180 }
00:29:30.180 Got JSON-RPC error response
00:29:30.180 response:
00:29:30.180 {
00:29:30.180 "code": -13,
00:29:30.180 "message": "Permission denied"
00:29:30.180 }
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:30.180 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:29:30.440 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:30.440 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:29:30.440 13:04:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:29:31.385 13:04:11
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.385 rmmod nvme_tcp 00:29:31.385 rmmod nvme_fabrics 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 798764 ']' 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 798764 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 798764 ']' 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 798764 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.385 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 798764 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 798764' 00:29:31.647 killing process with pid 798764 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 798764 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 798764 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:31.647 13:04:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.200 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.200 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:34.200 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:34.200 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:34.200 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:34.200 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:34.201 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:34.201 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:34.201 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:34.201 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:34.201 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:34.201 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:34.201 13:04:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:37.631 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:37.631 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:37.909 13:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PXx /tmp/spdk.key-null.dQW /tmp/spdk.key-sha256.dBV /tmp/spdk.key-sha384.wbB /tmp/spdk.key-sha512.OJN /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:37.909 13:04:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:42.111 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:29:42.111 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:42.111 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:42.111 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:42.372 00:29:42.372 real 1m4.791s 00:29:42.372 user 0m57.241s 00:29:42.372 sys 0m17.304s 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.372 ************************************ 00:29:42.372 END TEST nvmf_auth_host 00:29:42.372 ************************************ 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.372 ************************************ 00:29:42.372 START TEST nvmf_digest 00:29:42.372 ************************************ 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:42.372 * Looking for test storage... 
00:29:42.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:42.372 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:42.634 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:42.634 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.634 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.634 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:42.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.635 --rc genhtml_branch_coverage=1 00:29:42.635 --rc genhtml_function_coverage=1 00:29:42.635 --rc genhtml_legend=1 00:29:42.635 --rc geninfo_all_blocks=1 00:29:42.635 --rc geninfo_unexecuted_blocks=1 00:29:42.635 00:29:42.635 ' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:42.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.635 --rc genhtml_branch_coverage=1 00:29:42.635 --rc genhtml_function_coverage=1 00:29:42.635 --rc genhtml_legend=1 00:29:42.635 --rc geninfo_all_blocks=1 00:29:42.635 --rc geninfo_unexecuted_blocks=1 00:29:42.635 00:29:42.635 ' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:42.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.635 --rc genhtml_branch_coverage=1 00:29:42.635 --rc genhtml_function_coverage=1 00:29:42.635 --rc genhtml_legend=1 00:29:42.635 --rc geninfo_all_blocks=1 00:29:42.635 --rc geninfo_unexecuted_blocks=1 00:29:42.635 00:29:42.635 ' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:42.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.635 --rc genhtml_branch_coverage=1 00:29:42.635 --rc genhtml_function_coverage=1 00:29:42.635 --rc genhtml_legend=1 00:29:42.635 --rc geninfo_all_blocks=1 00:29:42.635 --rc geninfo_unexecuted_blocks=1 00:29:42.635 00:29:42.635 ' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.635 
13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:42.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.635 13:04:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.635 13:04:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.774 
13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:50.774 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:50.774 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:50.774 Found net devices under 0000:31:00.0: cvl_0_0 
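The discovery pass above resolves each matched e810 PCI function to its kernel net device by globbing sysfs (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)). A minimal standalone sketch of that same lookup; the loop and echo below are illustrative, not a copy of nvmf/common.sh:

  for pci in 0000:31:00.0 0000:31:00.1; do
      # each entry under .../net/ is a directory named after the bound net device
      for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
      done
  done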
00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:50.774 Found net devices under 0000:31:00.1: cvl_0_1 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.774 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:51.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:51.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms
00:29:51.035
00:29:51.035 --- 10.0.0.2 ping statistics ---
00:29:51.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:51.035 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:51.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:51.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms
00:29:51.035
00:29:51.035 --- 10.0.0.1 ping statistics ---
00:29:51.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:51.035 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:51.035 ************************************
00:29:51.035 START TEST nvmf_digest_clean
00:29:51.035 ************************************
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
host/digest.sh@120 -- # local dsa_initiator 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=817143 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 817143 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 817143 ']' 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.035 13:04:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.295 [2024-11-25 13:04:30.959928] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:29:51.295 [2024-11-25 13:04:30.959992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.295 [2024-11-25 13:04:31.052615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.295 [2024-11-25 13:04:31.093817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.295 [2024-11-25 13:04:31.093855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.295 [2024-11-25 13:04:31.093868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.295 [2024-11-25 13:04:31.093877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.295 [2024-11-25 13:04:31.093883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
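The trace above starts the target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, records nvmfpid=817143, and blocks in waitforlisten until the app is reachable. A condensed sketch of that start-then-wait pattern; the polling loop is an assumption about what waitforlisten does internally, shown only to make the sequence concrete:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # assumed behavior: poll until the app answers on its UNIX-domain RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done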
00:29:51.295 [2024-11-25 13:04:31.094489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.865 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.865 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:51.865 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.865 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.866 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:52.126 null0 00:29:52.126 [2024-11-25 13:04:31.870714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.126 [2024-11-25 13:04:31.894919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=817490 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 817490 /var/tmp/bperf.sock 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 817490 ']' 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:52.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.126 13:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:52.126 [2024-11-25 13:04:31.953739] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:29:52.126 [2024-11-25 13:04:31.953788] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817490 ] 00:29:52.387 [2024-11-25 13:04:32.048832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.387 [2024-11-25 13:04:32.084852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.958 13:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.958 13:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:52.958 13:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:52.958 13:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:52.958 13:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:53.218 13:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.218 13:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.479 nvme0n1 00:29:53.479 13:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:53.479 13:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:53.479 Running I/O for 2 seconds... 
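Condensed, the bperf sequence traced above is the following (sockets, ports, and NQNs exactly as logged; paths shortened relative to the workspace root). The --ddgst flag enables NVMe/TCP data digests, which is what drives the crc32c accel statistics checked after the run:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests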
00:29:55.803 19764.00 IOPS, 77.20 MiB/s [2024-11-25T12:04:35.706Z] 19674.00 IOPS, 76.85 MiB/s
00:29:55.803 Latency(us)
00:29:55.803 [2024-11-25T12:04:35.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.803 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:55.803 nvme0n1 : 2.01 19689.71 76.91 0.00 0.00 6491.61 2798.93 21845.33
00:29:55.803 [2024-11-25T12:04:35.706Z] ===================================================================================================================
00:29:55.803 [2024-11-25T12:04:35.706Z] Total : 19689.71 76.91 0.00 0.00 6491.61 2798.93 21845.33
00:29:55.803 {
00:29:55.803 "results": [
00:29:55.803 {
00:29:55.803 "job": "nvme0n1",
00:29:55.803 "core_mask": "0x2",
00:29:55.803 "workload": "randread",
00:29:55.803 "status": "finished",
00:29:55.803 "queue_depth": 128,
00:29:55.803 "io_size": 4096,
00:29:55.803 "runtime": 2.006835,
00:29:55.803 "iops": 19689.710414657908,
00:29:55.803 "mibps": 76.91293130725745,
00:29:55.803 "io_failed": 0,
00:29:55.803 "io_timeout": 0,
00:29:55.803 "avg_latency_us": 6491.610259317373,
00:29:55.803 "min_latency_us": 2798.9333333333334,
00:29:55.803 "max_latency_us": 21845.333333333332
00:29:55.803 }
00:29:55.803 ],
00:29:55.803 "core_count": 1
00:29:55.803 }
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:55.803 | select(.opcode=="crc32c")
00:29:55.803 | "\(.module_name) \(.executed)"'
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 817490
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 817490 ']'
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 817490
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 817490
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 817490' 00:29:55.803 killing process with pid 817490 00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 817490 00:29:55.803 Received shutdown signal, test time was about 2.000000 seconds 00:29:55.803 00:29:55.803 Latency(us) 00:29:55.803 [2024-11-25T12:04:35.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.803 [2024-11-25T12:04:35.706Z] =================================================================================================================== 00:29:55.803 [2024-11-25T12:04:35.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:55.803 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 817490 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=818176 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 818176 /var/tmp/bperf.sock 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 818176 ']' 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:56.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:56.064 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.065 13:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:56.065 [2024-11-25 13:04:35.781842] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:29:56.065 [2024-11-25 13:04:35.781902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818176 ] 00:29:56.065 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:56.065 Zero copy mechanism will not be used. 00:29:56.065 [2024-11-25 13:04:35.871626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.065 [2024-11-25 13:04:35.899138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.009 13:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.009 13:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:57.009 13:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:57.009 13:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:57.009 13:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:57.009 13:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:57.009 13:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:57.270 nvme0n1 00:29:57.270 13:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:57.270 13:04:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:57.270 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:57.270 Zero copy mechanism will not be used. 00:29:57.270 Running I/O for 2 seconds... 
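This second run uses 128 KiB I/O at queue depth 16, so throughput in MiB/s should equal IOPS divided by 8 (131072 / 2^20 = 1/8). A purely illustrative one-liner to cross-check the table that follows, not part of the harness:

  awk 'BEGIN { printf "%.2f MiB/s\n", 3240.71 * 131072 / 1048576 }'   # 405.09, matching the reported mibps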
00:29:59.597 3303.00 IOPS, 412.88 MiB/s [2024-11-25T12:04:39.500Z] 3241.50 IOPS, 405.19 MiB/s 00:29:59.597 Latency(us) 00:29:59.597 [2024-11-25T12:04:39.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.597 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:59.597 nvme0n1 : 2.01 3240.71 405.09 0.00 0.00 4934.35 843.09 7864.32 00:29:59.597 [2024-11-25T12:04:39.500Z] =================================================================================================================== 00:29:59.597 [2024-11-25T12:04:39.500Z] Total : 3240.71 405.09 0.00 0.00 4934.35 843.09 7864.32 00:29:59.597 { 00:29:59.597 "results": [ 00:29:59.597 { 00:29:59.597 "job": "nvme0n1", 00:29:59.597 "core_mask": "0x2", 00:29:59.597 "workload": "randread", 00:29:59.597 "status": "finished", 00:29:59.597 "queue_depth": 16, 00:29:59.597 "io_size": 131072, 00:29:59.597 "runtime": 2.005423, 00:29:59.597 "iops": 3240.7128072232144, 00:29:59.597 "mibps": 405.0891009029018, 00:29:59.597 "io_failed": 0, 00:29:59.597 "io_timeout": 0, 00:29:59.597 "avg_latency_us": 4934.354635072063, 00:29:59.597 "min_latency_us": 843.0933333333334, 00:29:59.597 "max_latency_us": 7864.32 00:29:59.597 } 00:29:59.597 ], 00:29:59.597 "core_count": 1 00:29:59.597 } 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:59.597 | select(.opcode=="crc32c") 00:29:59.597 | "\(.module_name) \(.executed)"' 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 818176 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 818176 ']' 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 818176 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 818176 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 818176' 00:29:59.597 killing process with pid 818176 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 818176 00:29:59.597 Received shutdown signal, test time was about 2.000000 seconds 00:29:59.597 00:29:59.597 Latency(us) 00:29:59.597 [2024-11-25T12:04:39.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.597 [2024-11-25T12:04:39.500Z] =================================================================================================================== 00:29:59.597 [2024-11-25T12:04:39.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 818176 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=818858 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 818858 /var/tmp/bperf.sock 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 818858 ']' 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:59.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:59.597 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.598 13:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:59.858 [2024-11-25 13:04:39.516368] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:29:59.858 [2024-11-25 13:04:39.516427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid818858 ] 00:29:59.858 [2024-11-25 13:04:39.604237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.858 [2024-11-25 13:04:39.633909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.429 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.429 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:00.429 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:00.429 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:00.429 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:00.690 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:00.690 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:01.262 nvme0n1 00:30:01.262 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:01.262 13:04:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:01.262 Running I/O for 2 seconds... 
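The pass/fail decision after each run is the accel-stats check visible in the trace: crc32c digest work must actually have executed, and in the expected module ("software" here, since scan_dsa=false). A minimal sketch of the check host/digest.sh replays above:

read -r acc_module acc_executed < <(
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
# Digest work must have run, and in the module the test expects.
(( acc_executed > 0 )) && [[ $acc_module == software ]]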
00:30:03.148 21631.00 IOPS, 84.50 MiB/s [2024-11-25T12:04:43.051Z] 21635.00 IOPS, 84.51 MiB/s 00:30:03.148 Latency(us) 00:30:03.148 [2024-11-25T12:04:43.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.148 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:03.148 nvme0n1 : 2.00 21656.63 84.60 0.00 0.00 5903.91 3072.00 10267.31 00:30:03.148 [2024-11-25T12:04:43.051Z] =================================================================================================================== 00:30:03.148 [2024-11-25T12:04:43.051Z] Total : 21656.63 84.60 0.00 0.00 5903.91 3072.00 10267.31 00:30:03.148 { 00:30:03.148 "results": [ 00:30:03.148 { 00:30:03.148 "job": "nvme0n1", 00:30:03.148 "core_mask": "0x2", 00:30:03.148 "workload": "randwrite", 00:30:03.148 "status": "finished", 00:30:03.148 "queue_depth": 128, 00:30:03.148 "io_size": 4096, 00:30:03.148 "runtime": 2.003913, 00:30:03.148 "iops": 21656.628805741566, 00:30:03.148 "mibps": 84.59620627242799, 00:30:03.148 "io_failed": 0, 00:30:03.148 "io_timeout": 0, 00:30:03.148 "avg_latency_us": 5903.910956265266, 00:30:03.148 "min_latency_us": 3072.0, 00:30:03.148 "max_latency_us": 10267.306666666667 00:30:03.148 } 00:30:03.148 ], 00:30:03.148 "core_count": 1 00:30:03.148 } 00:30:03.148 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:03.148 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:03.148 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:03.148 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:03.148 | select(.opcode=="crc32c") 00:30:03.148 | "\(.module_name) \(.executed)"' 00:30:03.148 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 818858 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 818858 ']' 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 818858 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 818858 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 818858' 00:30:03.409 killing process with pid 818858 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 818858 00:30:03.409 Received shutdown signal, test time was about 2.000000 seconds 00:30:03.409 00:30:03.409 Latency(us) 00:30:03.409 [2024-11-25T12:04:43.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.409 [2024-11-25T12:04:43.312Z] =================================================================================================================== 00:30:03.409 [2024-11-25T12:04:43.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:03.409 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 818858 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=819544 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 819544 /var/tmp/bperf.sock 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 819544 ']' 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:03.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.671 13:04:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:03.671 [2024-11-25 13:04:43.401566] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:30:03.671 [2024-11-25 13:04:43.401624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819544 ] 00:30:03.671 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:03.671 Zero copy mechanism will not be used. 00:30:03.671 [2024-11-25 13:04:43.491424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.671 [2024-11-25 13:04:43.519177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.614 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.614 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:04.614 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:04.614 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:04.614 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:04.614 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:04.614 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:04.874 nvme0n1 00:30:04.874 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:04.874 13:04:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:04.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:04.874 Zero copy mechanism will not be used. 00:30:04.874 Running I/O for 2 seconds... 
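The MiB/s column in these result tables is derived from the JSON fields rather than measured separately: mibps = iops × io_size / 2^20. Checking that against the 4 KiB randwrite JSON above:

awk 'BEGIN { printf "%.2f MiB/s\n", 21656.628805741566 * 4096 / 1048576 }'   # prints 84.60, matching "mibps"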
00:30:07.228 4368.00 IOPS, 546.00 MiB/s [2024-11-25T12:04:47.131Z] 4890.00 IOPS, 611.25 MiB/s 00:30:07.228 Latency(us) 00:30:07.228 [2024-11-25T12:04:47.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.228 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:07.228 nvme0n1 : 2.01 4885.52 610.69 0.00 0.00 3268.83 1529.17 10649.60 00:30:07.228 [2024-11-25T12:04:47.131Z] =================================================================================================================== 00:30:07.228 [2024-11-25T12:04:47.131Z] Total : 4885.52 610.69 0.00 0.00 3268.83 1529.17 10649.60 00:30:07.228 { 00:30:07.228 "results": [ 00:30:07.228 { 00:30:07.228 "job": "nvme0n1", 00:30:07.228 "core_mask": "0x2", 00:30:07.228 "workload": "randwrite", 00:30:07.228 "status": "finished", 00:30:07.228 "queue_depth": 16, 00:30:07.229 "io_size": 131072, 00:30:07.229 "runtime": 2.005107, 00:30:07.229 "iops": 4885.524812391558, 00:30:07.229 "mibps": 610.6906015489448, 00:30:07.229 "io_failed": 0, 00:30:07.229 "io_timeout": 0, 00:30:07.229 "avg_latency_us": 3268.828005988839, 00:30:07.229 "min_latency_us": 1529.1733333333334, 00:30:07.229 "max_latency_us": 10649.6 00:30:07.229 } 00:30:07.229 ], 00:30:07.229 "core_count": 1 00:30:07.229 } 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:07.229 | select(.opcode=="crc32c") 00:30:07.229 | "\(.module_name) \(.executed)"' 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 819544 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 819544 ']' 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 819544 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.229 13:04:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 819544 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 819544' 00:30:07.229 killing process with pid 819544 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 819544 00:30:07.229 Received shutdown signal, test time was about 2.000000 seconds 00:30:07.229 00:30:07.229 Latency(us) 00:30:07.229 [2024-11-25T12:04:47.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.229 [2024-11-25T12:04:47.132Z] =================================================================================================================== 00:30:07.229 [2024-11-25T12:04:47.132Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 819544 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 817143 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 817143 ']' 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 817143 00:30:07.229 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 817143 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 817143' 00:30:07.490 killing process with pid 817143 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 817143 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 817143 00:30:07.490 00:30:07.490 real 0m16.418s 00:30:07.490 user 0m32.510s 00:30:07.490 sys 0m3.521s 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:07.490 ************************************ 00:30:07.490 END TEST nvmf_digest_clean 00:30:07.490 ************************************ 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:07.490 ************************************ 00:30:07.490 START TEST nvmf_digest_error 00:30:07.490 ************************************ 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 
00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.490 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=820392 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 820392 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 820392 ']' 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.751 13:04:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:07.751 [2024-11-25 13:04:47.451305] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:30:07.751 [2024-11-25 13:04:47.451358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.751 [2024-11-25 13:04:47.538837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.751 [2024-11-25 13:04:47.576889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.751 [2024-11-25 13:04:47.576926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.751 [2024-11-25 13:04:47.576936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.751 [2024-11-25 13:04:47.576944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.751 [2024-11-25 13:04:47.576951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
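For the error-path test the nvmf target is deliberately started idle (--wait-for-rpc) so that the crc32c opcode can be re-routed to the error-injecting accel module before subsystem init. A sketch of the setup the trace walks through next (rpc_cmd is the harness's wrapper around scripts/rpc.py):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

# While init is still pending, assign crc32c to the "error" module; the target then
# comes up with a null0 bdev and a TCP listener on 10.0.0.2:4420 (see the NOTICEs below).
rpc_cmd accel_assign_opc -o crc32c -m error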
00:30:07.751 [2024-11-25 13:04:47.577547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 [2024-11-25 13:04:48.279564] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 null0 00:30:08.694 [2024-11-25 13:04:48.361715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.694 [2024-11-25 13:04:48.385918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=820601 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 820601 /var/tmp/bperf.sock 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 820601 ']' 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
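The wall of *ERROR* lines further down is the expected outcome, not a failure: once the bperf host is attached with --ddgst, the target is told to corrupt its crc32c results, so the host's data-digest verification fails and each affected READ completes as a transient transport error (00/22) and is retried (bdev_nvme_set_options --bdev-retry-count -1, as set below). The injection toggle, as driven by digest.sh:

rpc_cmd accel_error_inject_error -o crc32c -t disable         # injection off during setup
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  # start corrupting crc32c results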
00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:08.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.694 13:04:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 [2024-11-25 13:04:48.442963] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:30:08.694 [2024-11-25 13:04:48.443012] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820601 ] 00:30:08.694 [2024-11-25 13:04:48.533947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.694 [2024-11-25 13:04:48.563785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:09.637 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:09.899 nvme0n1 00:30:09.899 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:09.899 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.899 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:09.899 
13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.899 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:09.899 13:04:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:09.899 Running I/O for 2 seconds... 00:30:09.899 [2024-11-25 13:04:49.785233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:09.899 [2024-11-25 13:04:49.785263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.899 [2024-11-25 13:04:49.785272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:09.899 [2024-11-25 13:04:49.797656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:09.899 [2024-11-25 13:04:49.797676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.899 [2024-11-25 13:04:49.797683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.809659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.809678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.809685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.822833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.822851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.822858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.834918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.834936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.834943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.847203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.847221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.847233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.859932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.859950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.859957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.872360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.872378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.872384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.886135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.886153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.886160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.898508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.898526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.898532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.911555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.911573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.911579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.922315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.922333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.922339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.935526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.935543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.935550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.948414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.948432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.948440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.958874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.958895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.958902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.972511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.972529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.972535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.985115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.985133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.985139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.161 [2024-11-25 13:04:49.997479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.161 [2024-11-25 13:04:49.997496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.161 [2024-11-25 13:04:49.997503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.162 [2024-11-25 13:04:50.011091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.162 [2024-11-25 13:04:50.011109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.162 [2024-11-25 13:04:50.011115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.162 [2024-11-25 13:04:50.022850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.162 [2024-11-25 13:04:50.022872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.162 [2024-11-25 13:04:50.022879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.162 [2024-11-25 13:04:50.036226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.162 [2024-11-25 13:04:50.036243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.162 [2024-11-25 13:04:50.036250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.162 [2024-11-25 13:04:50.049628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.162 [2024-11-25 13:04:50.049646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.162 [2024-11-25 13:04:50.049653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.162 [2024-11-25 13:04:50.060903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.162 [2024-11-25 13:04:50.060920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.162 [2024-11-25 13:04:50.060927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.073058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.073076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.073082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.086655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.086672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.086678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.097222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.097240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.097247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.110424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.110442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:10.424 [2024-11-25 13:04:50.110449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.123539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.123557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.123564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.137312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.137330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.137337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.150346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.150364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.150371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.162803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.162820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.162827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.173885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.173903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.173913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.187347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.187365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.424 [2024-11-25 13:04:50.187372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:10.424 [2024-11-25 13:04:50.199571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410) 00:30:10.424 [2024-11-25 13:04:50.199589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16809 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.424 [2024-11-25 13:04:50.199596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... identical injected-error cycles from 13:04:50.212 through 13:04:51.758 truncated for readability: the same three log lines repeat well over a hundred times, varying only in timestamp, cid, and lba: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410); nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<n> nsid:1 lba:<n> len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<n> cdw0:0 sqhd:0001 p:0 m:0 dnr:0. A throughput sample interleaved at [2024-11-25T12:04:50.853Z] reads 20149.00 IOPS, 78.71 MiB/s. The final cycle follows. ...]
00:30:12.001 [2024-11-25 13:04:51.769287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd05410)
00:30:12.001 [2024-11-25 13:04:51.769304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:12.001 [2024-11-25 13:04:51.769311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:12.001 20148.50 IOPS, 78.71 MiB/s
00:30:12.001 Latency(us)
00:30:12.001 [2024-11-25T12:04:51.904Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:12.001 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:12.001 nvme0n1 :       2.00   20182.11      78.84       0.00       0.00    6337.36    2252.80   16165.55
00:30:12.001 [2024-11-25T12:04:51.904Z] ===================================================================================================================
00:30:12.001 [2024-11-25T12:04:51.904Z] Total   :           20182.11      78.84       0.00       0.00    6337.36    2252.80   16165.55
00:30:12.001 {
00:30:12.001   "results": [
00:30:12.001     {
00:30:12.001       "job": "nvme0n1",
00:30:12.001       "core_mask": "0x2",
00:30:12.001       "workload": "randread",
00:30:12.001       "status": "finished",
00:30:12.001       "queue_depth": 128,
00:30:12.001       "io_size": 4096,
00:30:12.001       "runtime": 2.003012,
00:30:12.001       "iops": 20182.105748742393,
00:30:12.001       "mibps": 78.83635058102497,
00:30:12.001       "io_failed": 0,
00:30:12.001       "io_timeout": 0,
00:30:12.001       "avg_latency_us": 6337.356699732014,
00:30:12.001       "min_latency_us": 2252.8,
00:30:12.001       "max_latency_us": 16165.546666666667
00:30:12.001     }
00:30:12.001   ],
00:30:12.001   "core_count": 1
00:30:12.001 }
00:30:12.001 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:12.001 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:12.001 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:12.001 | .driver_specific
00:30:12.001 | .nvme_error
00:30:12.001 | .status_code
00:30:12.001 | .command_transient_transport_error'
00:30:12.001 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:12.262 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:30:12.262 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 820601
00:30:12.262 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 820601 ']'
00:30:12.262 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 820601
00:30:12.262 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:12.262 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:12.262 13:04:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820601
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820601'
00:30:12.262 killing process with pid 820601
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 820601
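The pass/fail check traced at host/digest.sh@71 above reduces to one pipeline: read the bdev's NVMe error counters over RPC and assert that transient transport errors were recorded. A minimal stand-alone sketch of that query, using the exact RPC call and jq filter from this trace (the socket path and bdev name are this run's):

  # Query bdevperf's iostat for nvme0n1 and extract the transient transport
  # error counter (populated because bdev_nvme_set_options --nvme-error-stat
  # is set earlier in this run).
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # this run counted 158 such completions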
00:30:12.262 Received shutdown signal, test time was about 2.000000 seconds
00:30:12.262
00:30:12.262 Latency(us)
00:30:12.262 [2024-11-25T12:04:52.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.262 [2024-11-25T12:04:52.165Z] ===================================================================================================================
00:30:12.262 [2024-11-25T12:04:52.165Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 820601
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=821285
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 821285 /var/tmp/bperf.sock
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 821285 ']'
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:12.262 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:12.523 [2024-11-25 13:04:52.204322] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:30:12.523 [2024-11-25 13:04:52.204392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821285 ]
00:30:12.523 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:12.523 Zero copy mechanism will not be used.
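Here the harness relaunches bdevperf for the next case (randread, 131072-byte I/O, queue depth 16). A rough sketch of the launch-and-wait pattern visible in the trace, with waitforlisten standing in for the autotest_common.sh helper of the same name:

    # Start bdevperf idle on core mask 0x2; -z makes it wait for a perform_tests RPC,
    # and -r puts its RPC server on a private UNIX socket instead of the default
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # harness helper: poll until the socket accepts RPCs (up to max_retries=100)
    waitforlisten "$bperfpid" /var/tmp/bperf.sock

Running bdevperf idle and driving it over a dedicated socket lets the test reconfigure the controller and inject errors between runs without restarting the process for every RPC.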
00:30:12.523 [2024-11-25 13:04:52.294759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:12.523 [2024-11-25 13:04:52.324064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:13.096 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:13.096 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:13.096 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:13.096 13:04:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:13.357 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:13.357 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.357 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:13.357 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:13.357 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:13.357 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:13.617 nvme0n1
00:30:13.617 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:13.617 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:13.617 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:13.617 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:13.617 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:13.617 13:04:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:13.617 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:13.617 Zero copy mechanism will not be used.
00:30:13.617 Running I/O for 2 seconds...
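The trace above is the whole setup for this error case. Condensed below into the bare RPC calls as a sketch rather than the harness script itself; note that rpc_cmd in the trace goes to the nvmf target application's default RPC socket, while bperf_rpc goes to /var/tmp/bperf.sock, and the -i 32 argument is taken verbatim from the trace as the crc32c injection interval:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bperf side: keep per-status-code NVMe error stats and retry failed I/O forever
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the target with TCP data digest enabled; the digests are what get corrupted
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side (default RPC socket): corrupt crc32c results, interval 32
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the timed run; each corrupted digest appears below as a data digest
    # error followed by COMMAND TRANSIENT TRANSPORT ERROR (00/22) on the READ
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

With --bdev-retry-count -1 the corrupted reads are retried rather than failed, which is why the records that follow show repeated READ commands completing with transient transport errors while io_failed stays at zero.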
00:30:13.617 [2024-11-25 13:04:53.489176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.617 [2024-11-25 13:04:53.489209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.617 [2024-11-25 13:04:53.489218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.617 [2024-11-25 13:04:53.499298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.617 [2024-11-25 13:04:53.499324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.617 [2024-11-25 13:04:53.499332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.617 [2024-11-25 13:04:53.508753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.617 [2024-11-25 13:04:53.508774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.617 [2024-11-25 13:04:53.508781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.879 [2024-11-25 13:04:53.520286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.879 [2024-11-25 13:04:53.520307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.520314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.531425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.531445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.531451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.544390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.544411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.544418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.555882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.555901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.555908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.566564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.566583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.566590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.578380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.578399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.578405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.590889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.590908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.590915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.602826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.602844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.602855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.613549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.613567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.613573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.620269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.620288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.620295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.628342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.628361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.628368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.638207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.638225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.638232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.645854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.645878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.645884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.655535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.655555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.655561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.663273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.663292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.663299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.673576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.673595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.673601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.683110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.683132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.683139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.693164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.693183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.693189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.702218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.702237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.702244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.714556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.714574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.714581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.725595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.725614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.725621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.737151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.737170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.737176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.748075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.748094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.748101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.759322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.759341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.759347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.770353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.770371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 
[2024-11-25 13:04:53.770385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:13.880 [2024-11-25 13:04:53.779583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:13.880 [2024-11-25 13:04:53.779602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.880 [2024-11-25 13:04:53.779609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.142 [2024-11-25 13:04:53.787797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.142 [2024-11-25 13:04:53.787817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.142 [2024-11-25 13:04:53.787824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.142 [2024-11-25 13:04:53.797748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.142 [2024-11-25 13:04:53.797767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.142 [2024-11-25 13:04:53.797774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.142 [2024-11-25 13:04:53.806100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.142 [2024-11-25 13:04:53.806119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.142 [2024-11-25 13:04:53.806125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.142 [2024-11-25 13:04:53.817571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.142 [2024-11-25 13:04:53.817589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.142 [2024-11-25 13:04:53.817596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.142 [2024-11-25 13:04:53.826453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.142 [2024-11-25 13:04:53.826472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.142 [2024-11-25 13:04:53.826479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.142 [2024-11-25 13:04:53.835879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.142 [2024-11-25 13:04:53.835897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.142 [2024-11-25 13:04:53.835904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.142 [2024-11-25 13:04:53.844418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.142 [2024-11-25 13:04:53.844437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.142 [2024-11-25 13:04:53.844443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.142 [2024-11-25 13:04:53.854433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.142 [2024-11-25 13:04:53.854456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.854462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.865665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.865684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.865690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.875681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.875701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.875708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.888077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.888096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.888102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.900190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.900209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.900216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.911975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.911994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.912001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.921592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.921611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.921617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.929913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.929932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.929939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.940550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.940570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.940577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.951838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.951857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.951869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.963060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.963079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.963085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.973879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.973899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.973905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.983697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.983715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.983722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:53.993252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:53.993272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:53.993278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:54.001382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:54.001401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:54.001407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:54.010978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:54.010997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:54.011004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:54.021124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:54.021142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:54.021149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:54.032393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:54.032412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:54.032421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.143 [2024-11-25 13:04:54.043256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.143 [2024-11-25 13:04:54.043275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.143 [2024-11-25 13:04:54.043282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.052414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.405 
[2024-11-25 13:04:54.052433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.052440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.061649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.405 [2024-11-25 13:04:54.061668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.061675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.070525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.405 [2024-11-25 13:04:54.070544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.070551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.081016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.405 [2024-11-25 13:04:54.081035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.081042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.091214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.405 [2024-11-25 13:04:54.091233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.091240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.100156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.405 [2024-11-25 13:04:54.100176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.100182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.110919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.405 [2024-11-25 13:04:54.110938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.110945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.120023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf9ea40) 00:30:14.405 [2024-11-25 13:04:54.120044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.120051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.130376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.405 [2024-11-25 13:04:54.130395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.405 [2024-11-25 13:04:54.130401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.405 [2024-11-25 13:04:54.141806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.141825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.141831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.151702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.151720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.151726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.161026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.161046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.161052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.170721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.170741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.170748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.179225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.179244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.179251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.190705] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.190724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.190731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.201584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.201603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.201610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.211829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.211849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.211856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.221020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.221038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.221045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.231320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.231340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.231346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.241061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.241081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.241087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.249662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.249681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.249688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:30:14.406 [2024-11-25 13:04:54.259827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.259847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.259853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.269109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.269128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.269135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.277711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.277731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.277737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.288206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.288225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.288235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.406 [2024-11-25 13:04:54.298402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.406 [2024-11-25 13:04:54.298421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.406 [2024-11-25 13:04:54.298428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.667 [2024-11-25 13:04:54.308560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.667 [2024-11-25 13:04:54.308579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.667 [2024-11-25 13:04:54.308585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.667 [2024-11-25 13:04:54.318343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.667 [2024-11-25 13:04:54.318363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.667 [2024-11-25 13:04:54.318369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.667 [2024-11-25 13:04:54.327183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.667 [2024-11-25 13:04:54.327202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.667 [2024-11-25 13:04:54.327209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.667 [2024-11-25 13:04:54.337255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.667 [2024-11-25 13:04:54.337274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.667 [2024-11-25 13:04:54.337281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.667 [2024-11-25 13:04:54.347644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.667 [2024-11-25 13:04:54.347663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.667 [2024-11-25 13:04:54.347670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.667 [2024-11-25 13:04:54.359060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.667 [2024-11-25 13:04:54.359080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.667 [2024-11-25 13:04:54.359086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.667 [2024-11-25 13:04:54.367304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.667 [2024-11-25 13:04:54.367324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.667 [2024-11-25 13:04:54.367331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.667 [2024-11-25 13:04:54.375822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.667 [2024-11-25 13:04:54.375844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.375851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.668 [2024-11-25 13:04:54.385867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.668 [2024-11-25 13:04:54.385887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.385893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.668 [2024-11-25 13:04:54.396398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.668 [2024-11-25 13:04:54.396416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.396423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.668 [2024-11-25 13:04:54.407062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.668 [2024-11-25 13:04:54.407082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.407089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.668 [2024-11-25 13:04:54.417297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.668 [2024-11-25 13:04:54.417316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.417323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:14.668 [2024-11-25 13:04:54.428375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.668 [2024-11-25 13:04:54.428394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.428400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:14.668 [2024-11-25 13:04:54.438008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.668 [2024-11-25 13:04:54.438028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.438034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:14.668 [2024-11-25 13:04:54.447993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.668 [2024-11-25 13:04:54.448013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.448019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:14.668 [2024-11-25 13:04:54.459006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40) 00:30:14.668 [2024-11-25 13:04:54.459025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.668 [2024-11-25 13:04:54.459032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:14.668 [2024-11-25 13:04:54.469333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40)
00:30:14.668 [2024-11-25 13:04:54.469353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:14.668 [2024-11-25 13:04:54.469360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:14.668 3053.00 IOPS, 381.62 MiB/s [2024-11-25T12:04:54.571Z]
[... repeated entries condensed: the same three-line sequence (nvme_tcp.c:1365 data digest error on tqpair=(0xf9ea40), nvme_qpair.c: 243 READ command print, nvme_qpair.c: 474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) recurs for every outstanding READ on qid:1 from 13:04:54.480 through 13:04:55.479, differing only in timestamp, cid, and lba ...]
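Each triplet above is one injected failure working its way up the stack: the accel software path corrupts the computed CRC32C for a received data PDU, nvme_tcp.c reports a data digest error on the qpair, nvme_qpair.c prints the affected READ, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme then retries (dnr:0, retry count unlimited). The flood is driven by the accel error-injection RPC; the invocation below is a sketch in the same form as the one traced later in this excerpt for the randwrite pass (the -t corrupt mode, -i 256 count, and bperf.sock socket are taken from that trace; the exact invocation for this randread pass occurred earlier in the log):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        accel_error_inject_error -o crc32c -t corrupt -i 256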
00:30:15.719 3156.00 IOPS, 394.50 MiB/s [2024-11-25T12:04:55.622Z]
00:30:15.719 [2024-11-25 13:04:55.489921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9ea40)
00:30:15.719 [2024-11-25 13:04:55.489939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:15.719 [2024-11-25 13:04:55.489946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:15.719
00:30:15.719 Latency(us)
00:30:15.719 [2024-11-25T12:04:55.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:15.719 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:15.719 nvme0n1 : 2.00 3158.99 394.87 0.00 0.00 5060.24 853.33 12779.52
00:30:15.719 [2024-11-25T12:04:55.622Z] ===================================================================================================================
00:30:15.719 [2024-11-25T12:04:55.622Z] Total : 3158.99 394.87 0.00 0.00 5060.24 853.33 12779.52
00:30:15.719 {
00:30:15.719   "results": [
00:30:15.719     {
00:30:15.719       "job": "nvme0n1",
00:30:15.719       "core_mask": "0x2",
00:30:15.719       "workload": "randread",
00:30:15.719       "status": "finished",
00:30:15.719       "queue_depth": 16,
00:30:15.719       "io_size": 131072,
00:30:15.719       "runtime": 2.00317,
00:30:15.719       "iops": 3158.9929961011794,
00:30:15.719       "mibps": 394.8741245126474,
00:30:15.719       "io_failed": 0,
00:30:15.719       "io_timeout": 0,
00:30:15.719       "avg_latency_us": 5060.243742098609,
00:30:15.719       "min_latency_us": 853.3333333333334,
00:30:15.719       "max_latency_us": 12779.52
00:30:15.719     }
00:30:15.719   ],
00:30:15.719   "core_count": 1
00:30:15.719 }
00:30:15.719 13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:15.719 | .driver_specific
00:30:15.719 | .nvme_error
00:30:15.719 | .status_code
00:30:15.719 | .command_transient_transport_error'
00:30:15.719 13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:15.980 13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 ))
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 821285
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 821285 ']'
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 821285
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821285
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821285'
killing process with pid 821285
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 821285
Received shutdown signal, test time was about 2.000000 seconds
00:30:15.980
00:30:15.980 Latency(us)
[2024-11-25T12:04:55.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-25T12:04:55.883Z] ===================================================================================================================
[2024-11-25T12:04:55.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 821285
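The pass/fail decision for the run above is the (( 205 > 0 )) check in the trace: bdevperf counted 205 completions with the transient transport error status, and the test only requires the count to be non-zero. Condensed from the traced commands, the check amounts to the following shell sketch (the rpc.py invocation and jq path are verbatim from the trace; the function wrapper and the $rootdir variable are paraphrase):

    get_transient_errcount() {
        # Ask bdevperf, via its private RPC socket, for per-bdev I/O statistics
        # and extract the NVMe status counter that tracks
        # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions.
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # At least one injected digest error must have been counted:
    (( $(get_transient_errcount nvme0n1) > 0 ))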
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=821967
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 821967 /var/tmp/bperf.sock
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 821967 ']'
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
13:04:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:16.241 [2024-11-25 13:04:55.906904] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:30:16.241 [2024-11-25 13:04:55.906961] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821967 ]
00:30:16.241 [2024-11-25 13:04:55.994975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:16.241 [2024-11-25 13:04:56.024636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:16.813 13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:17.096 13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
13:04:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:17.356 nvme0n1
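Before I/O starts, the trace above configures the new bdevperf instance in three RPC steps: NVMe error accounting with unlimited retries, CRC32C injection switched off so the attach handshake completes cleanly, and a controller attach with data digest (--ddgst) enabled so the host computes and verifies CRC32C over every data PDU on the TCP connection. As a standalone sketch (the RPC commands are verbatim from the trace; the rpc wrapper function is shorthand introduced here):

    rpc() {
        # All of this bdevperf instance's configuration goes through its
        # private RPC socket rather than the main SPDK application socket.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    # Count completions per NVMe status code and retry failed I/O forever,
    # so injected digest errors show up as statistics, not job failures.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep CRC32C corruption disabled while the controller connects.
    rpc accel_error_inject_error -o crc32c -t disable

    # Attach the TCP target with data digest enabled; this creates bdev nvme0n1.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0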
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.356 nvme0n1 00:30:17.356 13:04:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:17.356 13:04:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.356 13:04:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:17.356 13:04:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.356 13:04:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:17.356 13:04:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:17.356 Running I/O for 2 seconds... 00:30:17.356 [2024-11-25 13:04:57.203870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ed0b0 00:30:17.356 [2024-11-25 13:04:57.205631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.356 [2024-11-25 13:04:57.205657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:17.356 [2024-11-25 13:04:57.215758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ec840 00:30:17.356 [2024-11-25 13:04:57.217504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.356 [2024-11-25 13:04:57.217522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:17.356 [2024-11-25 13:04:57.227597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ebfd0 00:30:17.356 [2024-11-25 13:04:57.229297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.356 [2024-11-25 13:04:57.229314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:17.356 [2024-11-25 13:04:57.239417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eb760 00:30:17.356 [2024-11-25 13:04:57.241154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.356 [2024-11-25 13:04:57.241170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:17.356 [2024-11-25 13:04:57.251249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eaef0 00:30:17.356 [2024-11-25 13:04:57.252899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.356 [2024-11-25 13:04:57.252915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:30:17.616 [2024-11-25 13:04:57.263066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ea680 00:30:17.616 [2024-11-25 13:04:57.264698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.616 [2024-11-25 13:04:57.264714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:17.616 [2024-11-25 13:04:57.274947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e1710 00:30:17.616 [2024-11-25 13:04:57.276567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.616 [2024-11-25 13:04:57.276583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:17.616 [2024-11-25 13:04:57.286750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e1f80 00:30:17.616 [2024-11-25 13:04:57.288354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.616 [2024-11-25 13:04:57.288370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:17.616 [2024-11-25 13:04:57.298528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e27f0 00:30:17.616 [2024-11-25 13:04:57.300112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.616 [2024-11-25 13:04:57.300128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:17.616 [2024-11-25 13:04:57.310317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e3060 00:30:17.616 [2024-11-25 13:04:57.311880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.311896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.320618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e38d0 00:30:17.617 [2024-11-25 13:04:57.321550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.321566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.334110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f31b8 00:30:17.617 [2024-11-25 13:04:57.335666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.335682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.345911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f3a28 00:30:17.617 [2024-11-25 13:04:57.347442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.347457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.357696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f4298 00:30:17.617 [2024-11-25 13:04:57.359214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.359230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.369481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f4b08 00:30:17.617 [2024-11-25 13:04:57.371028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.371044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.381315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f5378 00:30:17.617 [2024-11-25 13:04:57.382779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.382795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.393121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f5be8 00:30:17.617 [2024-11-25 13:04:57.394569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.394585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.404903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f6458 00:30:17.617 [2024-11-25 13:04:57.406329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.406345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.416926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f2510 00:30:17.617 [2024-11-25 13:04:57.418345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.418361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.426871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e3060 00:30:17.617 [2024-11-25 13:04:57.427799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.427815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.438639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e38d0 00:30:17.617 [2024-11-25 13:04:57.439559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.439578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.450422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e4140 00:30:17.617 [2024-11-25 13:04:57.451314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.451329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.463155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166efae0 00:30:17.617 [2024-11-25 13:04:57.464049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.464065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.474195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ef270 00:30:17.617 [2024-11-25 13:04:57.475106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.475121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.485992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eea00 00:30:17.617 [2024-11-25 13:04:57.486834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.486849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.497761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ee190 00:30:17.617 [2024-11-25 13:04:57.498583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.498599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:17.617 [2024-11-25 13:04:57.512649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ed920 00:30:17.617 [2024-11-25 13:04:57.514124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.617 [2024-11-25 13:04:57.514140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:17.878 [2024-11-25 13:04:57.523798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eea00 00:30:17.878 [2024-11-25 13:04:57.525248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.878 [2024-11-25 13:04:57.525264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:17.878 [2024-11-25 13:04:57.535566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ee190 00:30:17.878 [2024-11-25 13:04:57.537040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.878 [2024-11-25 13:04:57.537055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:17.878 [2024-11-25 13:04:57.547353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f92c0 00:30:17.878 [2024-11-25 13:04:57.548772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.878 [2024-11-25 13:04:57.548791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:17.878 [2024-11-25 13:04:57.559978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ef270 00:30:17.878 [2024-11-25 13:04:57.561362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.561378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.570987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166efae0 00:30:17.879 [2024-11-25 13:04:57.572366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.572382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.585171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8a50 00:30:17.879 [2024-11-25 13:04:57.587184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.587200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.596947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f81e0 00:30:17.879 [2024-11-25 13:04:57.598934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.598950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.607234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7970 00:30:17.879 [2024-11-25 13:04:57.608595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.608611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.618311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7100 00:30:17.879 [2024-11-25 13:04:57.619636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.619652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.630968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f0bc0 00:30:17.879 [2024-11-25 13:04:57.632275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.632291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.641976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f1430 00:30:17.879 [2024-11-25 13:04:57.643314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.643330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.653766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f1ca0 00:30:17.879 [2024-11-25 13:04:57.655085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.655100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.665561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f2510 00:30:17.879 [2024-11-25 13:04:57.666819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.666835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.679740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e23b8 00:30:17.879 [2024-11-25 13:04:57.681638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.681653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.691593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ec840 00:30:17.879 [2024-11-25 13:04:57.693465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.693481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.701880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ebfd0 00:30:17.879 [2024-11-25 13:04:57.703133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.703149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.713776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eaef0 00:30:17.879 [2024-11-25 13:04:57.715020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.715036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.724926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f35f0 00:30:17.879 [2024-11-25 13:04:57.726155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.726171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.736731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f3e60 00:30:17.879 [2024-11-25 13:04:57.737937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.737953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.748521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f46d0 00:30:17.879 [2024-11-25 13:04:57.749704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.749720] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.760311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f4f40 00:30:17.879 [2024-11-25 13:04:57.761480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.761495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:17.879 [2024-11-25 13:04:57.772932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166de470 00:30:17.879 [2024-11-25 13:04:57.774126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:17.879 [2024-11-25 13:04:57.774141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:18.141 [2024-11-25 13:04:57.784828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e9e10 00:30:18.141 [2024-11-25 13:04:57.786004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.141 [2024-11-25 13:04:57.786020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:18.141 [2024-11-25 13:04:57.798272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f6458 00:30:18.141 [2024-11-25 13:04:57.800056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.141 [2024-11-25 13:04:57.800072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:18.141 [2024-11-25 13:04:57.810219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f5be8 00:30:18.141 [2024-11-25 13:04:57.811984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.141 [2024-11-25 13:04:57.811999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:18.141 [2024-11-25 13:04:57.822008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f5378 00:30:18.141 [2024-11-25 13:04:57.823760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.141 [2024-11-25 13:04:57.823775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:18.141 [2024-11-25 13:04:57.833789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f4b08 00:30:18.141 [2024-11-25 13:04:57.835518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.141 [2024-11-25 13:04:57.835535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:18.141 [2024-11-25 13:04:57.845585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f4298 00:30:18.142 [2024-11-25 13:04:57.847291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.847306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.857379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f3a28 00:30:18.142 [2024-11-25 13:04:57.859098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.859116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.869190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f31b8 00:30:18.142 [2024-11-25 13:04:57.870853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.870872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.880978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f2948 00:30:18.142 [2024-11-25 13:04:57.882622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.882638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.892769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f20d8 00:30:18.142 [2024-11-25 13:04:57.894400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.894416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.904557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f1868 00:30:18.142 [2024-11-25 13:04:57.906165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.906181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.916361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f0ff8 00:30:18.142 [2024-11-25 13:04:57.917946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.917962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.928151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f0788 00:30:18.142 [2024-11-25 13:04:57.929711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.929728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.939943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eff18 00:30:18.142 [2024-11-25 13:04:57.941483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.941498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.950248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ef6a8 00:30:18.142 [2024-11-25 13:04:57.951158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.951174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.961337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eee38 00:30:18.142 [2024-11-25 13:04:57.962225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.962241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.975536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8618 00:30:18.142 [2024-11-25 13:04:57.977055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.977070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.987355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8e88 00:30:18.142 [2024-11-25 13:04:57.988848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:57.988866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:57.999146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f96f8 00:30:18.142 [2024-11-25 13:04:58.000618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 
13:04:58.000634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:58.010927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ed4e8 00:30:18.142 [2024-11-25 13:04:58.012389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:58.012406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:58.022715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166edd58 00:30:18.142 [2024-11-25 13:04:58.024153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:58.024169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:18.142 [2024-11-25 13:04:58.034508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fb048 00:30:18.142 [2024-11-25 13:04:58.035921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.142 [2024-11-25 13:04:58.035937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.046302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fdeb0 00:30:18.403 [2024-11-25 13:04:58.047694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.047710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.060514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8e88 00:30:18.403 [2024-11-25 13:04:58.062549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.062565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.072322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8618 00:30:18.403 [2024-11-25 13:04:58.074322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.074338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.084111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7da8 00:30:18.403 [2024-11-25 13:04:58.086111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
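The repeating data_crc32_calc_done / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs in this stream are the intended effect of the crc32c corruption armed at host/digest.sh@67 earlier in this run. A minimal bash sketch of arming and clearing that injection (RPC name and flags verbatim from this log; the target socket /var/tmp/spdk.sock is an assumption, since rpc_cmd in the suite supplies its own RPC address):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Corrupt the next 256 crc32c operations in the app's accel layer, so TCP
# data-digest calculations mismatch and each affected write completes with
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), as logged above.
"$rpc" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
# ... run digest-enabled I/O (bdevperf.py perform_tests) ...
"$rpc" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable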
00:30:18.403 [2024-11-25 13:04:58.086127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.096080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7538 00:30:18.403 [2024-11-25 13:04:58.098056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.098072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.106374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f6cc8 00:30:18.403 [2024-11-25 13:04:58.107716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.107733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.117466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e1710 00:30:18.403 [2024-11-25 13:04:58.118762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.118777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.129268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e1f80 00:30:18.403 [2024-11-25 13:04:58.130546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.130561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.141088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ecc78 00:30:18.403 [2024-11-25 13:04:58.142342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.142357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.152901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ec408 00:30:18.403 [2024-11-25 13:04:58.154159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.154175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.164702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ebb98 00:30:18.403 [2024-11-25 13:04:58.165924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10592 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.165942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.176494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eb328 00:30:18.403 [2024-11-25 13:04:58.177709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.177725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.188300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eaab8 00:30:18.403 [2024-11-25 13:04:58.189477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.189493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:18.403 21389.00 IOPS, 83.55 MiB/s [2024-11-25T12:04:58.306Z] [2024-11-25 13:04:58.201324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eb760 00:30:18.403 [2024-11-25 13:04:58.202669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.202685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.213120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ebfd0 00:30:18.403 [2024-11-25 13:04:58.214435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.214450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.225723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e9168 00:30:18.403 [2024-11-25 13:04:58.227053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.227068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.237663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e8088 00:30:18.403 [2024-11-25 13:04:58.238974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.238990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.248750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ec840 00:30:18.403 [2024-11-25 13:04:58.250083] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.250098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.260568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e23b8 00:30:18.403 [2024-11-25 13:04:58.261846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.261866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.272374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fb8b8 00:30:18.403 [2024-11-25 13:04:58.273637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.273653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.284176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e12d8 00:30:18.403 [2024-11-25 13:04:58.285414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.285430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:18.403 [2024-11-25 13:04:58.298372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e9168 00:30:18.403 [2024-11-25 13:04:58.300255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.403 [2024-11-25 13:04:58.300271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.310188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fa3a0 00:30:18.664 [2024-11-25 13:04:58.312041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.312057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.320484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f9b30 00:30:18.664 [2024-11-25 13:04:58.321707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.321723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.331653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e01f8 00:30:18.664 [2024-11-25 13:04:58.332852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.332873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.343446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166df988 00:30:18.664 [2024-11-25 13:04:58.344625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.344641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.355225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f6458 00:30:18.664 [2024-11-25 13:04:58.356388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.356403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.369425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e4140 00:30:18.664 [2024-11-25 13:04:58.371225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.371241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.381224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e38d0 00:30:18.664 [2024-11-25 13:04:58.383009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.383025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.393024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e3060 00:30:18.664 [2024-11-25 13:04:58.394834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.394851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.404878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e27f0 00:30:18.664 [2024-11-25 13:04:58.406610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.406626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.414479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e4578 00:30:18.664 [2024-11-25 13:04:58.415567] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.415583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.427133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166df550 00:30:18.664 [2024-11-25 13:04:58.428213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.428229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.440563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e95a0 00:30:18.664 [2024-11-25 13:04:58.442277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.442292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.452348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e1710 00:30:18.664 [2024-11-25 13:04:58.454046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.454062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.464211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f6cc8 00:30:18.664 [2024-11-25 13:04:58.465886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.465901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.476009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7538 00:30:18.664 [2024-11-25 13:04:58.477665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.477684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.487809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7da8 00:30:18.664 [2024-11-25 13:04:58.489447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.489462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.498119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8618 00:30:18.664 
[2024-11-25 13:04:58.499126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.499142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.509211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8e88 00:30:18.664 [2024-11-25 13:04:58.510176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.510191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.521018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f96f8 00:30:18.664 [2024-11-25 13:04:58.521959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.521974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.532810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ed4e8 00:30:18.664 [2024-11-25 13:04:58.533739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.533754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.545400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eff18 00:30:18.664 [2024-11-25 13:04:58.546313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.664 [2024-11-25 13:04:58.546329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:18.664 [2024-11-25 13:04:58.558830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e73e0 00:30:18.665 [2024-11-25 13:04:58.560382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.665 [2024-11-25 13:04:58.560398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.570618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e6b70 00:30:18.924 [2024-11-25 13:04:58.572149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.572164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.580907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with 
pdu=0x2000166e6300 00:30:18.924 [2024-11-25 13:04:58.581806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.581822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.591984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166de8a8 00:30:18.924 [2024-11-25 13:04:58.592852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.592869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.604574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ea248 00:30:18.924 [2024-11-25 13:04:58.605434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.605450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.618010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fdeb0 00:30:18.924 [2024-11-25 13:04:58.619503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.619519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.628310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fe720 00:30:18.924 [2024-11-25 13:04:58.629171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.629187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.639415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fc560 00:30:18.924 [2024-11-25 13:04:58.640243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.640258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.652045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fd640 00:30:18.924 [2024-11-25 13:04:58.652861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.652878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.666223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xba7880) with pdu=0x2000166fcdd0 00:30:18.924 [2024-11-25 13:04:58.667700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.924 [2024-11-25 13:04:58.667717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:18.924 [2024-11-25 13:04:58.678074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166df118 00:30:18.925 [2024-11-25 13:04:58.679543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.679559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.689213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ee5c8 00:30:18.925 [2024-11-25 13:04:58.690652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.690668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.703390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ee5c8 00:30:18.925 [2024-11-25 13:04:58.705475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.705491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.715263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166eee38 00:30:18.925 [2024-11-25 13:04:58.717316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.717332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.725545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ef6a8 00:30:18.925 [2024-11-25 13:04:58.726968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.726984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.737475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f0788 00:30:18.925 [2024-11-25 13:04:58.738887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.738902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.748564] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xba7880) with pdu=0x2000166f0ff8 00:30:18.925 [2024-11-25 13:04:58.749953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.749969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.760365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f1868 00:30:18.925 [2024-11-25 13:04:58.761727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.761743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.772238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e6300 00:30:18.925 [2024-11-25 13:04:58.773605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.773620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.784062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e6b70 00:30:18.925 [2024-11-25 13:04:58.785398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.785413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.798221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f1868 00:30:18.925 [2024-11-25 13:04:58.800192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.800208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.809991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f20d8 00:30:18.925 [2024-11-25 13:04:58.811948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.811965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:18.925 [2024-11-25 13:04:58.820288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f2948 00:30:18.925 [2024-11-25 13:04:58.821617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:18.925 [2024-11-25 13:04:58.821633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:19.185 [2024-11-25 13:04:58.833792] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e73e0 00:30:19.185 [2024-11-25 13:04:58.835738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.185 [2024-11-25 13:04:58.835754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:19.185 [2024-11-25 13:04:58.845603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e7c50 00:30:19.185 [2024-11-25 13:04:58.847562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.185 [2024-11-25 13:04:58.847577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:19.185 [2024-11-25 13:04:58.855903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e84c0 00:30:19.185 [2024-11-25 13:04:58.857197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.185 [2024-11-25 13:04:58.857213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:19.185 [2024-11-25 13:04:58.867039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f3a28 00:30:19.185 [2024-11-25 13:04:58.868304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.185 [2024-11-25 13:04:58.868320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:19.185 [2024-11-25 13:04:58.881215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e8d30 00:30:19.185 [2024-11-25 13:04:58.883119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.185 [2024-11-25 13:04:58.883134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:19.185 [2024-11-25 13:04:58.893003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fa7d8 00:30:19.185 [2024-11-25 13:04:58.894880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.185 [2024-11-25 13:04:58.894898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:19.185 [2024-11-25 13:04:58.902590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7538 00:30:19.185 [2024-11-25 13:04:58.903826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.903842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 
13:04:58.914376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f6cc8 00:30:19.186 [2024-11-25 13:04:58.915596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.915612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:58.928529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ecc78 00:30:19.186 [2024-11-25 13:04:58.930396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.930412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:58.940546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e5ec8 00:30:19.186 [2024-11-25 13:04:58.942389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.942405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:58.950518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e5220 00:30:19.186 [2024-11-25 13:04:58.951874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.951889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:58.962305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f9b30 00:30:19.186 [2024-11-25 13:04:58.963642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.963657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:58.974931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7538 00:30:19.186 [2024-11-25 13:04:58.976246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.976261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:58.985930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7da8 00:30:19.186 [2024-11-25 13:04:58.987241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.987257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:19.186 
[2024-11-25 13:04:58.997714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8618 00:30:19.186 [2024-11-25 13:04:58.999038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:58.999054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:59.009506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8e88 00:30:19.186 [2024-11-25 13:04:59.010780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:59.010796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:59.023704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e88f8 00:30:19.186 [2024-11-25 13:04:59.025610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:59.025625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:59.033994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e0ea0 00:30:19.186 [2024-11-25 13:04:59.035258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:59.035274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:59.045057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166fb480 00:30:19.186 [2024-11-25 13:04:59.046292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:59.046308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:59.057637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f96f8 00:30:19.186 [2024-11-25 13:04:59.058906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:59.058922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:19.186 [2024-11-25 13:04:59.069558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ddc00 00:30:19.186 [2024-11-25 13:04:59.070811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:59.070827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
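Each failure logged above is the same three-record pattern: data_crc32_calc_done reports a CRC32C data-digest mismatch on the TCP qpair, nvme_qpair.c prints the WRITE command that carried it, and the command completes with status (00/22), which SPDK prints next to its name, COMMAND TRANSIENT TRANSPORT ERROR; the (00/22) pair is the status code type and status code in hex, and this status is meant to be retried. To tally these completions per queue from a saved copy of this console output, something like the following sketch works (console.log is a hypothetical local file name, not part of this run):
  # count transient-transport completions per qid in a saved console log
  grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]*' console.log | sort | uniq -c
The resulting tally can be compared against the per-bdev counter that get_transient_errcount reads back below.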
00:30:19.186 [2024-11-25 13:04:59.081481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f1430 00:30:19.186 [2024-11-25 13:04:59.082714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.186 [2024-11-25 13:04:59.082730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:19.446 [2024-11-25 13:04:59.094967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f2510 00:30:19.446 [2024-11-25 13:04:59.097025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.446 [2024-11-25 13:04:59.097040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:19.446 [2024-11-25 13:04:59.106920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f1ca0 00:30:19.446 [2024-11-25 13:04:59.108767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.446 [2024-11-25 13:04:59.108783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:19.446 [2024-11-25 13:04:59.117205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f1430 00:30:19.446 [2024-11-25 13:04:59.118425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.446 [2024-11-25 13:04:59.118441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:19.446 [2024-11-25 13:04:59.128291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f0bc0 00:30:19.446 [2024-11-25 13:04:59.129477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.446 [2024-11-25 13:04:59.129493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:19.446 [2024-11-25 13:04:59.140310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f4b08 00:30:19.446 [2024-11-25 13:04:59.141486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.446 [2024-11-25 13:04:59.141502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:19.446 [2024-11-25 13:04:59.153341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f7538 00:30:19.446 [2024-11-25 13:04:59.154709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:19.446 [2024-11-25 13:04:59.154725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0
00:30:19.446 [2024-11-25 13:04:59.166745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166f8618
00:30:19.446 [2024-11-25 13:04:59.168723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:19.446 [2024-11-25 13:04:59.168739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:30:19.446 [2024-11-25 13:04:59.178617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166e9e10
00:30:19.446 [2024-11-25 13:04:59.180579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:19.446 [2024-11-25 13:04:59.180594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:30:19.446 [2024-11-25 13:04:59.190386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7880) with pdu=0x2000166ea680
00:30:19.446 [2024-11-25 13:04:59.192323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:19.446 [2024-11-25 13:04:59.192339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:30:19.446 21442.50 IOPS, 83.76 MiB/s
00:30:19.446 Latency(us)
00:30:19.446 [2024-11-25T12:04:59.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:19.446 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:19.446 nvme0n1 : 2.00 21450.84 83.79 0.00 0.00 5961.18 2252.80 16384.00
00:30:19.446 [2024-11-25T12:04:59.349Z] ===================================================================================================================
00:30:19.446 [2024-11-25T12:04:59.349Z] Total : 21450.84 83.79 0.00 0.00 5961.18 2252.80 16384.00
00:30:19.446 {
00:30:19.446 "results": [
00:30:19.446 {
00:30:19.446 "job": "nvme0n1",
00:30:19.446 "core_mask": "0x2",
00:30:19.446 "workload": "randwrite",
00:30:19.446 "status": "finished",
00:30:19.446 "queue_depth": 128,
00:30:19.446 "io_size": 4096,
00:30:19.446 "runtime": 2.002253,
00:30:19.446 "iops": 21450.835633658684,
00:30:19.446 "mibps": 83.79232669397923,
00:30:19.446 "io_failed": 0,
00:30:19.446 "io_timeout": 0,
00:30:19.446 "avg_latency_us": 5961.180912999612,
00:30:19.446 "min_latency_us": 2252.8,
00:30:19.446 "max_latency_us": 16384.0
00:30:19.446 }
00:30:19.446 ],
00:30:19.446 "core_count": 1
00:30:19.446 }
00:30:19.446 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:19.446 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:19.446 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:19.446 | .driver_specific
00:30:19.446 | .nvme_error
00:30:19.446 | .status_code
00:30:19.446 | .command_transient_transport_error'
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 ))
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 821967
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 821967 ']'
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 821967
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 821967
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 821967'
killing process with pid 821967
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 821967
00:30:19.704 Received shutdown signal, test time was about 2.000000 seconds
00:30:19.704
00:30:19.704 Latency(us)
00:30:19.704 [2024-11-25T12:04:59.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:19.704 [2024-11-25T12:04:59.607Z] ===================================================================================================================
00:30:19.704 [2024-11-25T12:04:59.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 821967
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=822673
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 822673 /var/tmp/bperf.sock
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 822673 ']'
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:19.704 13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
13:04:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:19.964 [2024-11-25 13:04:59.613508] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:30:19.964 [2024-11-25 13:04:59.613563] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822673 ]
00:30:19.965 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:19.965 Zero copy mechanism will not be used.
00:30:19.965 [2024-11-25 13:04:59.704243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:19.965 [2024-11-25 13:04:59.733266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:20.535 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:20.535 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:20.535 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:20.535 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:20.796 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:20.796 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.796 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:20.796 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.796 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:20.796 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:21.056 nvme0n1
00:30:21.317 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:21.317 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:21.318 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:21.318 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:21.318 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py
perform_tests 00:30:21.318 13:05:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:21.318 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:21.318 Zero copy mechanism will not be used. 00:30:21.318 Running I/O for 2 seconds... 00:30:21.318 [2024-11-25 13:05:01.077784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.077904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.077930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.086924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.087005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.087024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.095458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.095552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.095569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.103585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.103835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.103853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.109788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.109989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.110006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.114763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.114965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.114983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.124221] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.124467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.124485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.129635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.129825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.129842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.133858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.134053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.134070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.141204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.141398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.141415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.147411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.147707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.147724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.154981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.155204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.155220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.162128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.162434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.162451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.169404] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.169592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.169608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.177627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.177844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.177865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.186802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.187066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.187082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.194658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.194919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.194935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.202331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.202519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.202536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.211268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.211531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.211547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.318 [2024-11-25 13:05:01.217124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.318 [2024-11-25 13:05:01.217393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.318 [2024-11-25 13:05:01.217409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 
13:05:01.223639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.223928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.223946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.230663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.230947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.230963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.238769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.239032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.239048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.245111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.245412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.245428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.253044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.253374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.253391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.259915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.260154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.260177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.267174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.267368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.267385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
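These records belong to the second pass, run_bperf_err randwrite 131072 16: bdevperf issues 128 KiB random writes at queue depth 16 against nvme0n1 while crc32c error injection (rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32, traced above) keeps forcing data-digest failures, each of which completes as a retryable transient transport error. Stripped of the xtrace noise, the host-side setup and the final check reduce to a few RPCs; a minimal sketch using only the sockets and arguments that appear in this log (note that digest.sh issues the injection through rpc_cmd, whose target socket is not shown here, not through bperf.sock):
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # enable per-status-code NVMe error counters and unlimited bdev-layer retries
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach with data digest enabled so corrupted CRC32C is actually detected
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # injection side (digest.sh runs this via rpc_cmd against the main test app):
  #   rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # after perform_tests, read back the transient-transport-error counter:
  $rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
The final jq pipeline is the single-line equivalent of the multi-line filter traced at host/digest.sh@28 earlier; the test then asserts the counter is non-zero, as it did with (( 168 > 0 )) for the first run.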
00:30:21.580 [2024-11-25 13:05:01.273434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.273625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.273641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.277743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.277895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.277911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.286378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.286492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.286508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.293846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.294160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.294176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.300590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.580 [2024-11-25 13:05:01.300844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.580 [2024-11-25 13:05:01.300866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.580 [2024-11-25 13:05:01.307889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.308174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.308190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.315127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.315432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.315448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.323052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.323305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.323322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.333102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.333359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.333375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.339430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.339608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.339623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.348218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.348453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.348470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.357450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.357738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.357754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.366022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.366373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.366389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.371841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.372017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.372034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.380349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.380601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.380618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.386641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.386899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.386915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.392655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.392873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.392890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.401382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.401613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.401629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.407279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.407531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.407547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.415222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.415445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.415462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.420672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.420989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.421005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.426490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.426657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.426674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.433435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.433708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.433725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.441321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.441599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.441615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.447895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.448060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.448079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.454503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.454770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.454787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.462193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.462478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.462495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.470081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.470341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.470357] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.581 [2024-11-25 13:05:01.475771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.581 [2024-11-25 13:05:01.475951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.581 [2024-11-25 13:05:01.475967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.483387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.483653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.483670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.489922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.490087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.490103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.496151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.496328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.496345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.503521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.503702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.503718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.509633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.509813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.509831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.517423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.517737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.517754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.524112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.524379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.524395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.532680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.532856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.532878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.540000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.540321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.540338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.544527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.544702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.544719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.552731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.553054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.553071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.559361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.559539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.559556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.566175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.566278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 
13:05:01.566293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.574240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.574499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.574516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.581491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.581786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.581802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.587723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.843 [2024-11-25 13:05:01.587902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.843 [2024-11-25 13:05:01.587919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.843 [2024-11-25 13:05:01.595282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.595458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.595473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.604002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.604308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.604325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.612875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.613064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.613081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.620667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.620914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:21.844 [2024-11-25 13:05:01.620931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.629528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.629756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.629772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.639743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.640008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.640028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.648766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.649015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.649031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.657262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.657556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.657572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.666438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.666719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.666736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.675207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.675477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.675493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.682934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.683197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.683213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.690419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.690674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.690691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.698456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.698683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.698700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.707813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.708055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.708072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.716396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.716565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.716581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.725338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.725620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.725637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.734946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.735238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.735255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:21.844 [2024-11-25 13:05:01.742249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:21.844 [2024-11-25 13:05:01.742494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.844 [2024-11-25 13:05:01.742510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.750074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.750295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.750312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.758752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.759018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.759035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.765859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.766160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.766176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.772735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.773005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.773022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.780960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.781215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.781232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.786872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.787045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.787062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.792811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.793060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.793076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.797881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.798056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.798073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.805636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.805948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.805965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.812281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.812469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.812485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.818068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.818371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.818388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.826080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.826372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.826389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.831239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.831413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.831429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.839696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.839989] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.840009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.848988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.849265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.849282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.857925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.858187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.858203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.866643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.866834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.866851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.874126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.874299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.874316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.882003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.882260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.882277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.891226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.891453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.891469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.900488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.900710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.900726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.107 [2024-11-25 13:05:01.908235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.107 [2024-11-25 13:05:01.908434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.107 [2024-11-25 13:05:01.908451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.916570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.916880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.916900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.926313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.926625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.926642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.934365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.934614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.934630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.943330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.943539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.943556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.950959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.951247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.951264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.959255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 
13:05:01.959499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.959515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.966044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.966249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.966265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.974223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.974508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.974525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.982283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.982562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.982579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:01.991564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:01.991837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:01.991853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.108 [2024-11-25 13:05:02.001447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.108 [2024-11-25 13:05:02.001667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.108 [2024-11-25 13:05:02.001684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.368 [2024-11-25 13:05:02.011419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.368 [2024-11-25 13:05:02.011656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.368 [2024-11-25 13:05:02.011673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.368 [2024-11-25 13:05:02.022540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 
00:30:22.368 [2024-11-25 13:05:02.022732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.368 [2024-11-25 13:05:02.022748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.368 [2024-11-25 13:05:02.033773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.368 [2024-11-25 13:05:02.034014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.368 [2024-11-25 13:05:02.034031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.368 [2024-11-25 13:05:02.044998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.368 [2024-11-25 13:05:02.045278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.368 [2024-11-25 13:05:02.045295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.368 [2024-11-25 13:05:02.056477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.056783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.056800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.067912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.068191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.068208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.369 3995.00 IOPS, 499.38 MiB/s [2024-11-25T12:05:02.272Z] [2024-11-25 13:05:02.079577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.080068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.080087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.091170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.091451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.091468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.102423] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.102654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.102670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.113430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.113848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.113869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.124533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.124787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.124803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.135839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.136118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.136134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.147245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.147490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.147506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.158323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.158494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.158511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.170011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.170236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.170252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 
13:05:02.180878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.181162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.181181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.192063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.192314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.192331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.202788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.203031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.203047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.213583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.213850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.213870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.224826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.225132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.225149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.235649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.235915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.235930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.246240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.246512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.246529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:30:22.369 [2024-11-25 13:05:02.256363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.256630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.256647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.369 [2024-11-25 13:05:02.261689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.369 [2024-11-25 13:05:02.261981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.369 [2024-11-25 13:05:02.261998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.370 [2024-11-25 13:05:02.267846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.370 [2024-11-25 13:05:02.268072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.370 [2024-11-25 13:05:02.268088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:22.631 [2024-11-25 13:05:02.272483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.631 [2024-11-25 13:05:02.272619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.631 [2024-11-25 13:05:02.272635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:22.631 [2024-11-25 13:05:02.280271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.631 [2024-11-25 13:05:02.280570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.631 [2024-11-25 13:05:02.280587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:22.631 [2024-11-25 13:05:02.288315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.631 [2024-11-25 13:05:02.288461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.631 [2024-11-25 13:05:02.288478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:22.631 [2024-11-25 13:05:02.298284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8 00:30:22.631 [2024-11-25 13:05:02.298423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.631 [2024-11-25 13:05:02.298439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0
00:30:22.631 [2024-11-25 13:05:02.304610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xba7bc0) with pdu=0x2000166ff3c8
00:30:22.631 [2024-11-25 13:05:02.304937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.631 [2024-11-25 13:05:02.304954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... identical "Data digest error" / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets from 13:05:02.311284 through 13:05:03.081556 elided; the entries differ only in timestamp, lba, and sqhd ...]
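Each triplet above is one corrupted WRITE as the host sees it: tcp.c reports the CRC32C data-digest mismatch on the incoming PDU, nvme_qpair prints the affected command, and the completion carries COMMAND TRANSIENT TRANSPORT ERROR (status code 00/22). The digest.sh xtrace further down reads the accumulated counter over the bperf RPC socket; the sketch below reconstructs that check from the traced commands. The rpc.py path, socket, and jq filter appear verbatim in the log; the wrapper function body itself is an assumption.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes NVMe error counters under driver_specific;
        # the digest-error test keys on this single field.
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The test asserts that the injected corruption really produced errors;
    # this run reports 255, hence the "(( 255 > 0 ))" in the trace below.
    (( $(get_transient_errcount nvme0n1) > 0 ))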
00:30:23.421 3937.00 IOPS, 492.12 MiB/s
00:30:23.421 Latency(us)
[2024-11-25T12:05:03.324Z] Device Information : runtime(s)  IOPS     MiB/s    Fail/s   TO/s     Average  min      max
00:30:23.421 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:23.421 nvme0n1 : 2.01  3933.73  491.72   0.00     0.00     4060.24  1761.28  12069.55
[2024-11-25T12:05:03.324Z] ===================================================================================================================
[2024-11-25T12:05:03.324Z] Total : 3933.73  491.72   0.00     0.00     4060.24  1761.28  12069.55
00:30:23.421 {
00:30:23.421 "results": [ 00:30:23.421 { 00:30:23.421 "job": "nvme0n1", 00:30:23.421 "core_mask": "0x2", 00:30:23.421 "workload": "randwrite", 00:30:23.421 "status": "finished", 00:30:23.421 "queue_depth": 16, 00:30:23.421 "io_size": 131072, 00:30:23.421 "runtime": 2.005474, 00:30:23.421 "iops": 3933.733371761489, 00:30:23.421 "mibps": 491.7166714701861, 00:30:23.421 "io_failed": 0, 00:30:23.421 "io_timeout": 0, 00:30:23.421 "avg_latency_us": 4060.241558287911, 00:30:23.421 "min_latency_us": 1761.28, 00:30:23.421 "max_latency_us": 12069.546666666667 00:30:23.421 } 00:30:23.421 ], 00:30:23.421 "core_count": 1 00:30:23.421 } 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:23.421 | .driver_specific 00:30:23.421 | .nvme_error 00:30:23.421 | .status_code 00:30:23.421 | .command_transient_transport_error' 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 255 > 0 )) 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 822673 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 822673 ']' 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 822673 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.421 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 822673 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 822673' 00:30:23.716 killing process with pid 822673 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 822673 00:30:23.716 Received shutdown signal, test time was about 2.000000 seconds 00:30:23.716 00:30:23.716 Latency(us) 00:30:23.716 [2024-11-25T12:05:03.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.716 [2024-11-25T12:05:03.619Z] =================================================================================================================== 00:30:23.716 [2024-11-25T12:05:03.619Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 822673 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 820392 00:30:23.716 13:05:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 820392 ']' 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 820392 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820392 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820392' 00:30:23.716 killing process with pid 820392 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 820392 00:30:23.716 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 820392 00:30:24.024 00:30:24.024 real 0m16.273s 00:30:24.024 user 0m32.307s 00:30:24.024 sys 0m3.432s 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:24.024 ************************************ 00:30:24.024 END TEST nvmf_digest_error 00:30:24.024 ************************************ 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.024 rmmod nvme_tcp 00:30:24.024 rmmod nvme_fabrics 00:30:24.024 rmmod nvme_keyring 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 820392 ']' 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 820392 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 820392 ']' 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 820392 00:30:24.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (820392) - No such process 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 820392 is not found' 00:30:24.024 Process with pid 820392 is not found 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.024 13:05:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.582 00:30:26.582 real 0m43.730s 00:30:26.582 user 1m7.300s 00:30:26.582 sys 0m13.460s 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:26.582 ************************************ 00:30:26.582 END TEST nvmf_digest 00:30:26.582 ************************************ 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.582 ************************************ 00:30:26.582 START TEST nvmf_bdevperf 00:30:26.582 ************************************ 00:30:26.582 13:05:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:26.582 * Looking for test storage... 
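Before bdevperf proper starts, autotest_common.sh locates test storage and scripts/common.sh compares tool versions (the lt 1.15 2 lcov check traced just after this point). Below is a simplified sketch of that comparison, reconstructed from the traced statements; the real helper additionally validates each component through a decimal() check, which is omitted here.

    cmp_versions() {
        # Split "1.15" / "2" on dots and dashes (the traced IFS=.- reads),
        # then compare component by component until one side wins.
        local IFS=.-
        local -a ver1 ver2
        local op=$2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        local lt=0 gt=0 eq=0 v
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                gt=1
                break
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                lt=1
                break
            fi
        done
        case "$op" in
            '<') ((lt == 1)) ;;
            '>') ((gt == 1)) ;;
        esac
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'installed lcov predates the 2.x output format'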
00:30:26.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:26.582 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.582 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.582 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:26.582 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:26.582 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.582 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.582 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.582 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.583 --rc genhtml_branch_coverage=1 00:30:26.583 --rc genhtml_function_coverage=1 00:30:26.583 --rc genhtml_legend=1 00:30:26.583 --rc geninfo_all_blocks=1 00:30:26.583 --rc geninfo_unexecuted_blocks=1 00:30:26.583 00:30:26.583 ' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.583 --rc genhtml_branch_coverage=1 00:30:26.583 --rc genhtml_function_coverage=1 00:30:26.583 --rc genhtml_legend=1 00:30:26.583 --rc geninfo_all_blocks=1 00:30:26.583 --rc geninfo_unexecuted_blocks=1 00:30:26.583 00:30:26.583 ' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.583 --rc genhtml_branch_coverage=1 00:30:26.583 --rc genhtml_function_coverage=1 00:30:26.583 --rc genhtml_legend=1 00:30:26.583 --rc geninfo_all_blocks=1 00:30:26.583 --rc geninfo_unexecuted_blocks=1 00:30:26.583 00:30:26.583 ' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.583 --rc genhtml_branch_coverage=1 00:30:26.583 --rc genhtml_function_coverage=1 00:30:26.583 --rc genhtml_legend=1 00:30:26.583 --rc geninfo_all_blocks=1 00:30:26.583 --rc geninfo_unexecuted_blocks=1 00:30:26.583 00:30:26.583 ' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:26.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.583 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.584 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.584 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.584 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.584 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.584 13:05:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.752 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:34.753 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:34.753 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:34.753 Found net devices under 0000:31:00.0: cvl_0_0 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:34.753 Found net devices under 0000:31:00.1: cvl_0_1 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.753 13:05:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:30:34.753 00:30:34.753 --- 10.0.0.2 ping statistics --- 00:30:34.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.753 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:34.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:30:34.753 00:30:34.753 --- 10.0.0.1 ping statistics --- 00:30:34.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.753 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=828119 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 828119 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 828119 ']' 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:34.753 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:34.753 [2024-11-25 13:05:14.149402] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
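For readability, here is the namespace topology the trace has just wired up, collected in one place; these are the same ip and iptables commands scattered through the xtrace above, and the cvl_0_0/cvl_0_1 interface names are specific to this machine:

    ip netns add cvl_0_0_ns_spdk                          # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port (the trace also tags it with an SPDK_NVMF comment)

The two pings above (10.0.0.2 from the root namespace, then 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm the link works in both directions before the target is started.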
00:30:34.753 [2024-11-25 13:05:14.149472] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.754 [2024-11-25 13:05:14.257131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:34.754 [2024-11-25 13:05:14.310142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.754 [2024-11-25 13:05:14.310193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.754 [2024-11-25 13:05:14.310201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.754 [2024-11-25 13:05:14.310213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.754 [2024-11-25 13:05:14.310219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.754 [2024-11-25 13:05:14.312220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.754 [2024-11-25 13:05:14.312387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.754 [2024-11-25 13:05:14.312387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.325 [2024-11-25 13:05:14.995526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.325 13:05:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:35.325 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.325 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.325 Malloc0 00:30:35.325 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.325 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:35.325 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.325 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:35.326 [2024-11-25 13:05:15.073447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.326 { 00:30:35.326 "params": { 00:30:35.326 "name": "Nvme$subsystem", 00:30:35.326 "trtype": "$TEST_TRANSPORT", 00:30:35.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.326 "adrfam": "ipv4", 00:30:35.326 "trsvcid": "$NVMF_PORT", 00:30:35.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.326 "hdgst": ${hdgst:-false}, 00:30:35.326 "ddgst": ${ddgst:-false} 00:30:35.326 }, 00:30:35.326 "method": "bdev_nvme_attach_controller" 00:30:35.326 } 00:30:35.326 EOF 00:30:35.326 )") 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:35.326 13:05:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.326 "params": { 00:30:35.326 "name": "Nvme1", 00:30:35.326 "trtype": "tcp", 00:30:35.326 "traddr": "10.0.0.2", 00:30:35.326 "adrfam": "ipv4", 00:30:35.326 "trsvcid": "4420", 00:30:35.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.326 "hdgst": false, 00:30:35.326 "ddgst": false 00:30:35.326 }, 00:30:35.326 "method": "bdev_nvme_attach_controller" 00:30:35.326 }' 00:30:35.326 [2024-11-25 13:05:15.130621] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:30:35.326 [2024-11-25 13:05:15.130670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828378 ] 00:30:35.326 [2024-11-25 13:05:15.206945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.587 [2024-11-25 13:05:15.243024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.847 Running I/O for 1 seconds... 00:30:36.790 8828.00 IOPS, 34.48 MiB/s 00:30:36.790 Latency(us) 00:30:36.790 [2024-11-25T12:05:16.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.790 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:36.790 Verification LBA range: start 0x0 length 0x4000 00:30:36.790 Nvme1n1 : 1.01 8904.37 34.78 0.00 0.00 14291.82 1092.27 18896.21 00:30:36.790 [2024-11-25T12:05:16.693Z] =================================================================================================================== 00:30:36.790 [2024-11-25T12:05:16.693Z] Total : 8904.37 34.78 0.00 0.00 14291.82 1092.27 18896.21 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=828713 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:36.790 { 00:30:36.790 "params": { 00:30:36.790 "name": "Nvme$subsystem", 00:30:36.790 "trtype": "$TEST_TRANSPORT", 00:30:36.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.790 "adrfam": "ipv4", 00:30:36.790 "trsvcid": "$NVMF_PORT", 00:30:36.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.790 "hdgst": ${hdgst:-false}, 00:30:36.790 "ddgst": ${ddgst:-false} 00:30:36.790 }, 00:30:36.790 "method": "bdev_nvme_attach_controller" 00:30:36.790 } 00:30:36.790 EOF 00:30:36.790 )") 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:36.790 13:05:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:36.790 "params": { 00:30:36.790 "name": "Nvme1", 00:30:36.790 "trtype": "tcp", 00:30:36.790 "traddr": "10.0.0.2", 00:30:36.790 "adrfam": "ipv4", 00:30:36.790 "trsvcid": "4420", 00:30:36.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:36.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:36.790 "hdgst": false, 00:30:36.790 "ddgst": false 00:30:36.790 }, 00:30:36.790 "method": "bdev_nvme_attach_controller" 00:30:36.790 }' 00:30:37.052 [2024-11-25 13:05:16.720417] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:30:37.052 [2024-11-25 13:05:16.720472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828713 ] 00:30:37.052 [2024-11-25 13:05:16.799687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.052 [2024-11-25 13:05:16.835145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.312 Running I/O for 15 seconds... 00:30:39.196 10240.00 IOPS, 40.00 MiB/s [2024-11-25T12:05:20.050Z] 10801.00 IOPS, 42.19 MiB/s [2024-11-25T12:05:20.050Z] 13:05:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 828119 00:30:40.147 13:05:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:40.147 [2024-11-25 13:05:19.685514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.147 [2024-11-25 13:05:19.685559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.147 [2024-11-25 13:05:19.685580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.147 [2024-11-25 13:05:19.685590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.147 [2024-11-25 13:05:19.685601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.147 [2024-11-25 13:05:19.685608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.147 [2024-11-25 13:05:19.685618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.147 [2024-11-25 13:05:19.685629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.147 [2024-11-25 13:05:19.685643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.147 [2024-11-25 13:05:19.685650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.147 [2024-11-25 13:05:19.685660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.147 [2024-11-25 
13:05:19.685668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: the same pair of nvme_qpair.c notices repeats for every remaining in-flight command on qid:1 (WRITEs covering lba 104184 through 104672 and interleaved READs from lba 103928 through 104064), each completed with 'ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0' after the target process was killed]
nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.150 [2024-11-25 13:05:19.687078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.150 [2024-11-25 13:05:19.687095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.150 [2024-11-25 13:05:19.687111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.150 [2024-11-25 13:05:19.687129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.150 [2024-11-25 13:05:19.687146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.150 [2024-11-25 13:05:19.687162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.150 [2024-11-25 13:05:19.687180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.150 [2024-11-25 13:05:19.687197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104696 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:40.150 [2024-11-25 13:05:19.687417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687586] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.150 [2024-11-25 13:05:19.687678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.150 [2024-11-25 13:05:19.687685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.151 [2024-11-25 13:05:19.687703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.151 [2024-11-25 13:05:19.687719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.151 [2024-11-25 13:05:19.687737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:40.151 [2024-11-25 13:05:19.687753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d333f0 is same with the state(6) to be set 00:30:40.151 [2024-11-25 13:05:19.687771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:40.151 [2024-11-25 13:05:19.687778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:40.151 [2024-11-25 13:05:19.687785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104944 len:8 PRP1 0x0 PRP2 0x0 00:30:40.151 [2024-11-25 13:05:19.687792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.151 [2024-11-25 13:05:19.687882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.151 [2024-11-25 13:05:19.687898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.151 [2024-11-25 13:05:19.687913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:40.151 [2024-11-25 13:05:19.687929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:40.151 [2024-11-25 13:05:19.687936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.151 [2024-11-25 13:05:19.691414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.151 [2024-11-25 13:05:19.691434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.151 [2024-11-25 13:05:19.692344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.151 [2024-11-25 13:05:19.692382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.151 [2024-11-25 13:05:19.692393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.151 [2024-11-25 13:05:19.692638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.151 [2024-11-25 13:05:19.692882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.151 [2024-11-25 13:05:19.692892] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.151 [2024-11-25 13:05:19.692901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.151 [2024-11-25 13:05:19.692911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.151 [2024-11-25 13:05:19.705463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.151 [2024-11-25 13:05:19.706164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.151 [2024-11-25 13:05:19.706203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.151 [2024-11-25 13:05:19.706216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.151 [2024-11-25 13:05:19.706457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.151 [2024-11-25 13:05:19.706680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.151 [2024-11-25 13:05:19.706689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.151 [2024-11-25 13:05:19.706698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.151 [2024-11-25 13:05:19.706707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.151 [2024-11-25 13:05:19.719266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.151 [2024-11-25 13:05:19.719962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.151 [2024-11-25 13:05:19.720000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.151 [2024-11-25 13:05:19.720013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.151 [2024-11-25 13:05:19.720255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.151 [2024-11-25 13:05:19.720478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.151 [2024-11-25 13:05:19.720487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.151 [2024-11-25 13:05:19.720496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.151 [2024-11-25 13:05:19.720504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.151 [2024-11-25 13:05:19.733070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.151 [2024-11-25 13:05:19.733683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.151 [2024-11-25 13:05:19.733721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.151 [2024-11-25 13:05:19.733733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.151 [2024-11-25 13:05:19.733979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.151 [2024-11-25 13:05:19.734204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.151 [2024-11-25 13:05:19.734217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.151 [2024-11-25 13:05:19.734225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.151 [2024-11-25 13:05:19.734234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.151 [2024-11-25 13:05:19.746985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.151 [2024-11-25 13:05:19.747545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.151 [2024-11-25 13:05:19.747564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.151 [2024-11-25 13:05:19.747573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.151 [2024-11-25 13:05:19.747792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.151 [2024-11-25 13:05:19.748017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.151 [2024-11-25 13:05:19.748026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.151 [2024-11-25 13:05:19.748033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.151 [2024-11-25 13:05:19.748041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.151 [2024-11-25 13:05:19.760777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.151 [2024-11-25 13:05:19.761438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.151 [2024-11-25 13:05:19.761476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.151 [2024-11-25 13:05:19.761488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.151 [2024-11-25 13:05:19.761726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.151 [2024-11-25 13:05:19.761958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.151 [2024-11-25 13:05:19.761968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.151 [2024-11-25 13:05:19.761976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.151 [2024-11-25 13:05:19.761984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.151 [2024-11-25 13:05:19.774737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.151 [2024-11-25 13:05:19.775364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.151 [2024-11-25 13:05:19.775402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.151 [2024-11-25 13:05:19.775413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.151 [2024-11-25 13:05:19.775652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.151 [2024-11-25 13:05:19.775884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.151 [2024-11-25 13:05:19.775894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.151 [2024-11-25 13:05:19.775902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.151 [2024-11-25 13:05:19.775910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.151 [2024-11-25 13:05:19.788666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.151 [2024-11-25 13:05:19.789350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.152 [2024-11-25 13:05:19.789388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.152 [2024-11-25 13:05:19.789400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.152 [2024-11-25 13:05:19.789638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.152 [2024-11-25 13:05:19.789870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.152 [2024-11-25 13:05:19.789880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.152 [2024-11-25 13:05:19.789889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.152 [2024-11-25 13:05:19.789897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.152 [2024-11-25 13:05:19.802457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.152 [2024-11-25 13:05:19.803122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.152 [2024-11-25 13:05:19.803160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.152 [2024-11-25 13:05:19.803171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.152 [2024-11-25 13:05:19.803409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.152 [2024-11-25 13:05:19.803632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.152 [2024-11-25 13:05:19.803641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.152 [2024-11-25 13:05:19.803649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.152 [2024-11-25 13:05:19.803657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.152 [2024-11-25 13:05:19.816419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.152 [2024-11-25 13:05:19.817108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.152 [2024-11-25 13:05:19.817147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.152 [2024-11-25 13:05:19.817158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.152 [2024-11-25 13:05:19.817397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.152 [2024-11-25 13:05:19.817620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.152 [2024-11-25 13:05:19.817629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.152 [2024-11-25 13:05:19.817637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.152 [2024-11-25 13:05:19.817645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.152 [2024-11-25 13:05:19.830218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.152 [2024-11-25 13:05:19.830815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.152 [2024-11-25 13:05:19.830838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.152 [2024-11-25 13:05:19.830846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.152 [2024-11-25 13:05:19.831072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.152 [2024-11-25 13:05:19.831292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.152 [2024-11-25 13:05:19.831300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.152 [2024-11-25 13:05:19.831308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.152 [2024-11-25 13:05:19.831314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.152 [2024-11-25 13:05:19.844060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.152 [2024-11-25 13:05:19.844714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.152 [2024-11-25 13:05:19.844752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.152 [2024-11-25 13:05:19.844763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.152 [2024-11-25 13:05:19.845012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.152 [2024-11-25 13:05:19.845236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.152 [2024-11-25 13:05:19.845246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.152 [2024-11-25 13:05:19.845254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.152 [2024-11-25 13:05:19.845262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.152 [2024-11-25 13:05:19.858025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.152 [2024-11-25 13:05:19.858559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.152 [2024-11-25 13:05:19.858597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.152 [2024-11-25 13:05:19.858608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.152 [2024-11-25 13:05:19.858847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.152 [2024-11-25 13:05:19.859078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.152 [2024-11-25 13:05:19.859087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.152 [2024-11-25 13:05:19.859096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.152 [2024-11-25 13:05:19.859104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.152 [2024-11-25 13:05:19.871860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.152 [2024-11-25 13:05:19.872442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.152 [2024-11-25 13:05:19.872462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.152 [2024-11-25 13:05:19.872469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.152 [2024-11-25 13:05:19.872693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.152 [2024-11-25 13:05:19.872918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.152 [2024-11-25 13:05:19.872927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.152 [2024-11-25 13:05:19.872935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.152 [2024-11-25 13:05:19.872941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.152 [2024-11-25 13:05:19.885690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.152 [2024-11-25 13:05:19.886240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.152 [2024-11-25 13:05:19.886257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.152 [2024-11-25 13:05:19.886265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.152 [2024-11-25 13:05:19.886484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.152 [2024-11-25 13:05:19.886703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.152 [2024-11-25 13:05:19.886711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.152 [2024-11-25 13:05:19.886718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.152 [2024-11-25 13:05:19.886725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.152 [2024-11-25 13:05:19.899478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.152 [2024-11-25 13:05:19.900122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.153 [2024-11-25 13:05:19.900160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.153 [2024-11-25 13:05:19.900171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.153 [2024-11-25 13:05:19.900410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.153 [2024-11-25 13:05:19.900634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.153 [2024-11-25 13:05:19.900643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.153 [2024-11-25 13:05:19.900650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.153 [2024-11-25 13:05:19.900658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.153 [2024-11-25 13:05:19.913416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.153 [2024-11-25 13:05:19.913963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.153 [2024-11-25 13:05:19.914001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.153 [2024-11-25 13:05:19.914013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.153 [2024-11-25 13:05:19.914253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.153 [2024-11-25 13:05:19.914475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.153 [2024-11-25 13:05:19.914485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.153 [2024-11-25 13:05:19.914497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.153 [2024-11-25 13:05:19.914505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.153 [2024-11-25 13:05:19.927267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.153 [2024-11-25 13:05:19.927948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.153 [2024-11-25 13:05:19.927986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.153 [2024-11-25 13:05:19.927999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.153 [2024-11-25 13:05:19.928241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.153 [2024-11-25 13:05:19.928464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.153 [2024-11-25 13:05:19.928473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.153 [2024-11-25 13:05:19.928481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.153 [2024-11-25 13:05:19.928489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.153 [2024-11-25 13:05:19.941250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.153 [2024-11-25 13:05:19.941896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.153 [2024-11-25 13:05:19.941934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.153 [2024-11-25 13:05:19.941947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.153 [2024-11-25 13:05:19.942188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.153 [2024-11-25 13:05:19.942412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.153 [2024-11-25 13:05:19.942422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.153 [2024-11-25 13:05:19.942430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.153 [2024-11-25 13:05:19.942439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.153 [2024-11-25 13:05:19.955203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.153 [2024-11-25 13:05:19.955852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.153 [2024-11-25 13:05:19.955898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.153 [2024-11-25 13:05:19.955910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.153 [2024-11-25 13:05:19.956150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.153 [2024-11-25 13:05:19.956373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.153 [2024-11-25 13:05:19.956382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.153 [2024-11-25 13:05:19.956390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.153 [2024-11-25 13:05:19.956398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.153 [2024-11-25 13:05:19.969157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.153 [2024-11-25 13:05:19.969837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.153 [2024-11-25 13:05:19.969882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.153 [2024-11-25 13:05:19.969895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.153 [2024-11-25 13:05:19.970135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.153 [2024-11-25 13:05:19.970358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.153 [2024-11-25 13:05:19.970367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.153 [2024-11-25 13:05:19.970374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.153 [2024-11-25 13:05:19.970382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.153 [2024-11-25 13:05:19.983141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.153 [2024-11-25 13:05:19.983815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.153 [2024-11-25 13:05:19.983852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.153 [2024-11-25 13:05:19.983872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.153 [2024-11-25 13:05:19.984111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.153 [2024-11-25 13:05:19.984334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.153 [2024-11-25 13:05:19.984343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.153 [2024-11-25 13:05:19.984351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.153 [2024-11-25 13:05:19.984359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.153 9791.67 IOPS, 38.25 MiB/s [2024-11-25T12:05:20.056Z] [2024-11-25 13:05:19.997948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.153 [2024-11-25 13:05:19.998609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.153 [2024-11-25 13:05:19.998647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.153 [2024-11-25 13:05:19.998658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.153 [2024-11-25 13:05:19.998907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.153 [2024-11-25 13:05:19.999132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.153 [2024-11-25 13:05:19.999141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.153 [2024-11-25 13:05:19.999149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.153 [2024-11-25 13:05:19.999158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.153 [2024-11-25 13:05:20.011774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.153 [2024-11-25 13:05:20.012360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.153 [2024-11-25 13:05:20.012403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.153 [2024-11-25 13:05:20.012414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.153 [2024-11-25 13:05:20.012653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.153 [2024-11-25 13:05:20.012885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.153 [2024-11-25 13:05:20.012895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.153 [2024-11-25 13:05:20.012902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.153 [2024-11-25 13:05:20.012910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.153 [2024-11-25 13:05:20.025681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.153 [2024-11-25 13:05:20.026234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.153 [2024-11-25 13:05:20.026254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.153 [2024-11-25 13:05:20.026262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.153 [2024-11-25 13:05:20.026482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.153 [2024-11-25 13:05:20.026701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.153 [2024-11-25 13:05:20.026710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.153 [2024-11-25 13:05:20.026717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.153 [2024-11-25 13:05:20.026724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.153 [2024-11-25 13:05:20.039483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.153 [2024-11-25 13:05:20.039905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.154 [2024-11-25 13:05:20.039924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.154 [2024-11-25 13:05:20.039933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.154 [2024-11-25 13:05:20.040153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.154 [2024-11-25 13:05:20.040373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.154 [2024-11-25 13:05:20.040381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.154 [2024-11-25 13:05:20.040389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.154 [2024-11-25 13:05:20.040396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.416 [2024-11-25 13:05:20.053363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.416 [2024-11-25 13:05:20.053905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.416 [2024-11-25 13:05:20.053923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.416 [2024-11-25 13:05:20.053930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.416 [2024-11-25 13:05:20.054154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.416 [2024-11-25 13:05:20.054373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.416 [2024-11-25 13:05:20.054382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.416 [2024-11-25 13:05:20.054389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.416 [2024-11-25 13:05:20.054396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.416 [2024-11-25 13:05:20.067148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.416 [2024-11-25 13:05:20.067567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.416 [2024-11-25 13:05:20.067585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.416 [2024-11-25 13:05:20.067592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.416 [2024-11-25 13:05:20.067811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.416 [2024-11-25 13:05:20.068045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.416 [2024-11-25 13:05:20.068056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.416 [2024-11-25 13:05:20.068063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.416 [2024-11-25 13:05:20.068070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.416 [2024-11-25 13:05:20.081031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.416 [2024-11-25 13:05:20.081562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.416 [2024-11-25 13:05:20.081579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.416 [2024-11-25 13:05:20.081587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.416 [2024-11-25 13:05:20.081807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.416 [2024-11-25 13:05:20.082031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.416 [2024-11-25 13:05:20.082039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.416 [2024-11-25 13:05:20.082047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.416 [2024-11-25 13:05:20.082053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.416 [2024-11-25 13:05:20.095175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.416 [2024-11-25 13:05:20.095831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.416 [2024-11-25 13:05:20.095877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.416 [2024-11-25 13:05:20.095889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.416 [2024-11-25 13:05:20.096127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.416 [2024-11-25 13:05:20.096351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.416 [2024-11-25 13:05:20.096365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.416 [2024-11-25 13:05:20.096373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.416 [2024-11-25 13:05:20.096381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.416 [2024-11-25 13:05:20.109139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.416 [2024-11-25 13:05:20.109811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.416 [2024-11-25 13:05:20.109848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.417 [2024-11-25 13:05:20.109859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.417 [2024-11-25 13:05:20.110107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.417 [2024-11-25 13:05:20.110330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.417 [2024-11-25 13:05:20.110339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.417 [2024-11-25 13:05:20.110347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.417 [2024-11-25 13:05:20.110355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.417 [2024-11-25 13:05:20.123110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.417 [2024-11-25 13:05:20.123789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.417 [2024-11-25 13:05:20.123827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.417 [2024-11-25 13:05:20.123840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.417 [2024-11-25 13:05:20.124089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.417 [2024-11-25 13:05:20.124313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.417 [2024-11-25 13:05:20.124322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.417 [2024-11-25 13:05:20.124331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.417 [2024-11-25 13:05:20.124339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.417 [2024-11-25 13:05:20.136899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.417 [2024-11-25 13:05:20.137544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.417 [2024-11-25 13:05:20.137582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.417 [2024-11-25 13:05:20.137593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.417 [2024-11-25 13:05:20.137832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.417 [2024-11-25 13:05:20.138065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.417 [2024-11-25 13:05:20.138075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.417 [2024-11-25 13:05:20.138083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.417 [2024-11-25 13:05:20.138091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.417 [2024-11-25 13:05:20.150855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.417 [2024-11-25 13:05:20.151538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.417 [2024-11-25 13:05:20.151576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.417 [2024-11-25 13:05:20.151587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.417 [2024-11-25 13:05:20.151826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.417 [2024-11-25 13:05:20.152058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.417 [2024-11-25 13:05:20.152068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.417 [2024-11-25 13:05:20.152077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.417 [2024-11-25 13:05:20.152085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.417 [2024-11-25 13:05:20.164830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.417 [2024-11-25 13:05:20.165393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.417 [2024-11-25 13:05:20.165431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.417 [2024-11-25 13:05:20.165442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.417 [2024-11-25 13:05:20.165680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.417 [2024-11-25 13:05:20.165915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.417 [2024-11-25 13:05:20.165924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.417 [2024-11-25 13:05:20.165932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.417 [2024-11-25 13:05:20.165940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.417 [2024-11-25 13:05:20.178689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.417 [2024-11-25 13:05:20.179253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.417 [2024-11-25 13:05:20.179273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.417 [2024-11-25 13:05:20.179281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.417 [2024-11-25 13:05:20.179500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.417 [2024-11-25 13:05:20.179718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.417 [2024-11-25 13:05:20.179727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.417 [2024-11-25 13:05:20.179734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.417 [2024-11-25 13:05:20.179740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.417 [2024-11-25 13:05:20.192492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.417 [2024-11-25 13:05:20.193064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.417 [2024-11-25 13:05:20.193085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.417 [2024-11-25 13:05:20.193093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.417 [2024-11-25 13:05:20.193312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.417 [2024-11-25 13:05:20.193531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.417 [2024-11-25 13:05:20.193539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.417 [2024-11-25 13:05:20.193547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.417 [2024-11-25 13:05:20.193554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.417 [2024-11-25 13:05:20.206315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.417 [2024-11-25 13:05:20.206849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.417 [2024-11-25 13:05:20.206870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.417 [2024-11-25 13:05:20.206879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.417 [2024-11-25 13:05:20.207099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.417 [2024-11-25 13:05:20.207317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.417 [2024-11-25 13:05:20.207326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.417 [2024-11-25 13:05:20.207334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.417 [2024-11-25 13:05:20.207341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.417 [2024-11-25 13:05:20.220294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.417 [2024-11-25 13:05:20.220932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.418 [2024-11-25 13:05:20.220970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.418 [2024-11-25 13:05:20.220983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.418 [2024-11-25 13:05:20.221224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.418 [2024-11-25 13:05:20.221448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.418 [2024-11-25 13:05:20.221464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.418 [2024-11-25 13:05:20.221472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.418 [2024-11-25 13:05:20.221480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.418 [2024-11-25 13:05:20.234255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.418 [2024-11-25 13:05:20.234937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.418 [2024-11-25 13:05:20.234975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.418 [2024-11-25 13:05:20.234985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.418 [2024-11-25 13:05:20.235229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.418 [2024-11-25 13:05:20.235452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.418 [2024-11-25 13:05:20.235461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.418 [2024-11-25 13:05:20.235469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.418 [2024-11-25 13:05:20.235477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.418 [2024-11-25 13:05:20.248235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.418 [2024-11-25 13:05:20.248939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.418 [2024-11-25 13:05:20.248977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.418 [2024-11-25 13:05:20.248989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.418 [2024-11-25 13:05:20.249232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.418 [2024-11-25 13:05:20.249455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.418 [2024-11-25 13:05:20.249465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.418 [2024-11-25 13:05:20.249472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.418 [2024-11-25 13:05:20.249480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.418 [2024-11-25 13:05:20.262039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.418 [2024-11-25 13:05:20.262701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.418 [2024-11-25 13:05:20.262738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.418 [2024-11-25 13:05:20.262749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.418 [2024-11-25 13:05:20.262997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.418 [2024-11-25 13:05:20.263221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.418 [2024-11-25 13:05:20.263230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.418 [2024-11-25 13:05:20.263238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.418 [2024-11-25 13:05:20.263246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.418 [2024-11-25 13:05:20.275989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.418 [2024-11-25 13:05:20.276536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.418 [2024-11-25 13:05:20.276555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.418 [2024-11-25 13:05:20.276564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.418 [2024-11-25 13:05:20.276783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.418 [2024-11-25 13:05:20.277008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.418 [2024-11-25 13:05:20.277022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.418 [2024-11-25 13:05:20.277029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.418 [2024-11-25 13:05:20.277036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.418 [2024-11-25 13:05:20.289773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.418 [2024-11-25 13:05:20.290330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.418 [2024-11-25 13:05:20.290347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.418 [2024-11-25 13:05:20.290355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.418 [2024-11-25 13:05:20.290573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.418 [2024-11-25 13:05:20.290791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.418 [2024-11-25 13:05:20.290800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.418 [2024-11-25 13:05:20.290807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.418 [2024-11-25 13:05:20.290814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.418 [2024-11-25 13:05:20.303568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.418 [2024-11-25 13:05:20.304106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.418 [2024-11-25 13:05:20.304123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.418 [2024-11-25 13:05:20.304131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.418 [2024-11-25 13:05:20.304349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.418 [2024-11-25 13:05:20.304568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.418 [2024-11-25 13:05:20.304576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.418 [2024-11-25 13:05:20.304583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.418 [2024-11-25 13:05:20.304589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.317541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.318075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.318091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.318099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.318317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.318535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.318544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.318551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.318558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.331521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.332160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.332197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.332208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.332447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.332670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.332679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.332687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.332695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.345462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.346198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.346236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.346247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.346486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.346709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.346717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.346725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.346733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.359291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.359961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.359999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.360011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.360252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.360475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.360484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.360492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.360499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.373258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.373962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.374005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.374018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.374258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.374481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.374490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.374497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.374505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.387057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.387750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.387788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.387800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.388052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.388275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.388284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.388292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.388300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.400852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.401482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.401519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.401532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.401772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.402004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.402014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.402022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.402030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.414775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.415417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.415455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.415466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.415709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.415941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.415952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.415959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.415967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.428733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.429373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.429412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.681 [2024-11-25 13:05:20.429422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.681 [2024-11-25 13:05:20.429661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.681 [2024-11-25 13:05:20.429895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.681 [2024-11-25 13:05:20.429905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.681 [2024-11-25 13:05:20.429913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.681 [2024-11-25 13:05:20.429921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.681 [2024-11-25 13:05:20.442676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.681 [2024-11-25 13:05:20.443296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.681 [2024-11-25 13:05:20.443334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.443345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.443585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.443807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.443817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.443826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.443833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.456591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.457268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.457306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.457317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.457555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.457778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.457792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.457801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.457810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.470573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.471156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.471176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.471183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.471402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.471621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.471629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.471637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.471643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.484394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.485088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.485126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.485137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.485376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.485599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.485608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.485616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.485623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.498398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.498988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.499026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.499038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.499280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.499503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.499513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.499520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.499528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.512290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.512979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.513017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.513030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.513272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.513495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.513505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.513512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.513520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.526294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.526830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.526874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.526887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.527126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.527349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.527357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.527365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.527373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.540132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.540814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.540852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.540872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.541111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.541335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.541343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.541351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.541359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.554112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.554787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.554828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.554840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.555088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.555312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.555321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.555328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.555336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.682 [2024-11-25 13:05:20.568097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.682 [2024-11-25 13:05:20.568700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.682 [2024-11-25 13:05:20.568738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.682 [2024-11-25 13:05:20.568749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.682 [2024-11-25 13:05:20.568996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.682 [2024-11-25 13:05:20.569220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.682 [2024-11-25 13:05:20.569229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.682 [2024-11-25 13:05:20.569237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.682 [2024-11-25 13:05:20.569245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.945 [2024-11-25 13:05:20.582002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.945 [2024-11-25 13:05:20.582662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.945 [2024-11-25 13:05:20.582699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.945 [2024-11-25 13:05:20.582710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.945 [2024-11-25 13:05:20.582958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.945 [2024-11-25 13:05:20.583182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.945 [2024-11-25 13:05:20.583191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.945 [2024-11-25 13:05:20.583199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.945 [2024-11-25 13:05:20.583207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.945 [2024-11-25 13:05:20.595966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.945 [2024-11-25 13:05:20.596595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.945 [2024-11-25 13:05:20.596633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.945 [2024-11-25 13:05:20.596644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.945 [2024-11-25 13:05:20.596897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.945 [2024-11-25 13:05:20.597132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.945 [2024-11-25 13:05:20.597142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.945 [2024-11-25 13:05:20.597150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.945 [2024-11-25 13:05:20.597158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.945 [2024-11-25 13:05:20.609916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:40.945 [2024-11-25 13:05:20.610554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:40.945 [2024-11-25 13:05:20.610592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:40.945 [2024-11-25 13:05:20.610603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:40.945 [2024-11-25 13:05:20.610841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:40.945 [2024-11-25 13:05:20.611075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:40.945 [2024-11-25 13:05:20.611085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:40.945 [2024-11-25 13:05:20.611093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:40.945 [2024-11-25 13:05:20.611101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:40.945 [2024-11-25 13:05:20.623854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.945 [2024-11-25 13:05:20.624506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.945 [2024-11-25 13:05:20.624544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.945 [2024-11-25 13:05:20.624556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.945 [2024-11-25 13:05:20.624799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.945 [2024-11-25 13:05:20.625038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.945 [2024-11-25 13:05:20.625049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.945 [2024-11-25 13:05:20.625056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.945 [2024-11-25 13:05:20.625064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.945 [2024-11-25 13:05:20.637818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.945 [2024-11-25 13:05:20.638373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.945 [2024-11-25 13:05:20.638393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.945 [2024-11-25 13:05:20.638401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.945 [2024-11-25 13:05:20.638620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.945 [2024-11-25 13:05:20.638839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.945 [2024-11-25 13:05:20.638852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.945 [2024-11-25 13:05:20.638859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.945 [2024-11-25 13:05:20.638872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.945 [2024-11-25 13:05:20.651608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.945 [2024-11-25 13:05:20.652149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.945 [2024-11-25 13:05:20.652167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.945 [2024-11-25 13:05:20.652174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.945 [2024-11-25 13:05:20.652393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.945 [2024-11-25 13:05:20.652611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.945 [2024-11-25 13:05:20.652619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.945 [2024-11-25 13:05:20.652627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.945 [2024-11-25 13:05:20.652633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.945 [2024-11-25 13:05:20.665585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.945 [2024-11-25 13:05:20.666162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.945 [2024-11-25 13:05:20.666199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.945 [2024-11-25 13:05:20.666211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.945 [2024-11-25 13:05:20.666449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.945 [2024-11-25 13:05:20.666672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.945 [2024-11-25 13:05:20.666681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.945 [2024-11-25 13:05:20.666688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.945 [2024-11-25 13:05:20.666696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.945 [2024-11-25 13:05:20.679455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.945 [2024-11-25 13:05:20.680172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.945 [2024-11-25 13:05:20.680210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.945 [2024-11-25 13:05:20.680221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.945 [2024-11-25 13:05:20.680460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.945 [2024-11-25 13:05:20.680683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.945 [2024-11-25 13:05:20.680692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.945 [2024-11-25 13:05:20.680699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.945 [2024-11-25 13:05:20.680707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.945 [2024-11-25 13:05:20.693265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.945 [2024-11-25 13:05:20.693957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.945 [2024-11-25 13:05:20.693995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.945 [2024-11-25 13:05:20.694008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.945 [2024-11-25 13:05:20.694249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.945 [2024-11-25 13:05:20.694473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.694490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.694499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.694508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.946 [2024-11-25 13:05:20.707074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.707618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.707638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.707645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.707872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.708092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.708100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.708108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.708115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.946 [2024-11-25 13:05:20.720963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.721538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.721576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.721588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.721828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.722062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.722072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.722080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.722088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.946 [2024-11-25 13:05:20.734906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.735579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.735621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.735633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.735882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.736106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.736115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.736122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.736131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.946 [2024-11-25 13:05:20.748891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.749423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.749460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.749471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.749710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.749943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.749953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.749961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.749969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.946 [2024-11-25 13:05:20.762722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.763417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.763455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.763466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.763704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.763936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.763945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.763953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.763961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.946 [2024-11-25 13:05:20.776511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.777174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.777212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.777223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.777466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.777689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.777699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.777706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.777714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.946 [2024-11-25 13:05:20.790476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.791119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.791156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.791167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.791406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.791629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.791638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.791646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.791654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.946 [2024-11-25 13:05:20.804429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.805081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.805119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.805130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.805368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.805591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.805600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.805608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.805616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.946 [2024-11-25 13:05:20.818372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.819070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.819109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.819119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.819358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.946 [2024-11-25 13:05:20.819581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.946 [2024-11-25 13:05:20.819597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.946 [2024-11-25 13:05:20.819605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.946 [2024-11-25 13:05:20.819613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:40.946 [2024-11-25 13:05:20.832182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:40.946 [2024-11-25 13:05:20.832850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:40.946 [2024-11-25 13:05:20.832896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:40.946 [2024-11-25 13:05:20.832907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:40.946 [2024-11-25 13:05:20.833146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:40.947 [2024-11-25 13:05:20.833369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:40.947 [2024-11-25 13:05:20.833378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:40.947 [2024-11-25 13:05:20.833385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:40.947 [2024-11-25 13:05:20.833393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:40.947 [2024-11-25 13:05:20.846154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.209 [2024-11-25 13:05:20.846850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.209 [2024-11-25 13:05:20.846896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.209 [2024-11-25 13:05:20.846907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.209 [2024-11-25 13:05:20.847146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.209 [2024-11-25 13:05:20.847371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.209 [2024-11-25 13:05:20.847380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.209 [2024-11-25 13:05:20.847388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.209 [2024-11-25 13:05:20.847396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.209 [2024-11-25 13:05:20.859955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.209 [2024-11-25 13:05:20.860616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.209 [2024-11-25 13:05:20.860653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.209 [2024-11-25 13:05:20.860664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.209 [2024-11-25 13:05:20.860913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.209 [2024-11-25 13:05:20.861137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.209 [2024-11-25 13:05:20.861146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.209 [2024-11-25 13:05:20.861153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.209 [2024-11-25 13:05:20.861162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.209 [2024-11-25 13:05:20.873924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.209 [2024-11-25 13:05:20.874577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.209 [2024-11-25 13:05:20.874615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.209 [2024-11-25 13:05:20.874626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.209 [2024-11-25 13:05:20.874875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.209 [2024-11-25 13:05:20.875099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.209 [2024-11-25 13:05:20.875108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.209 [2024-11-25 13:05:20.875115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.209 [2024-11-25 13:05:20.875123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.209 [2024-11-25 13:05:20.887877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.209 [2024-11-25 13:05:20.888514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.209 [2024-11-25 13:05:20.888552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.209 [2024-11-25 13:05:20.888563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.209 [2024-11-25 13:05:20.888802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.209 [2024-11-25 13:05:20.889035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.209 [2024-11-25 13:05:20.889046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.209 [2024-11-25 13:05:20.889053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.209 [2024-11-25 13:05:20.889061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.209 [2024-11-25 13:05:20.901838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.209 [2024-11-25 13:05:20.902417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.209 [2024-11-25 13:05:20.902438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.209 [2024-11-25 13:05:20.902446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.209 [2024-11-25 13:05:20.902665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:20.902890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:20.902899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:20.902906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:20.902913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.210 [2024-11-25 13:05:20.915670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:20.916172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:20.916194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:20.916201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:20.916420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:20.916638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:20.916647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:20.916654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:20.916660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.210 [2024-11-25 13:05:20.929642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:20.930185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:20.930201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:20.930209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:20.930427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:20.930645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:20.930655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:20.930664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:20.930671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.210 [2024-11-25 13:05:20.943437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:20.943988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:20.944005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:20.944013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:20.944232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:20.944450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:20.944459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:20.944466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:20.944473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.210 [2024-11-25 13:05:20.957241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:20.957772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:20.957788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:20.957795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:20.958024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:20.958243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:20.958252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:20.958259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:20.958265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.210 [2024-11-25 13:05:20.971027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:20.971360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:20.971378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:20.971385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:20.971604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:20.971824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:20.971832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:20.971839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:20.971846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.210 [2024-11-25 13:05:20.984825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:20.985399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:20.985416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:20.985424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:20.985642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:20.985860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:20.985878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:20.985885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:20.985892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.210 7343.75 IOPS, 28.69 MiB/s [2024-11-25T12:05:21.113Z] [2024-11-25 13:05:21.000402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:21.001092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:21.001130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:21.001141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:21.001379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:21.001602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:21.001615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:21.001623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:21.001632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.210 [2024-11-25 13:05:21.014398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:21.014987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:21.015026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:21.015037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:21.015276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:21.015499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:21.015508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:21.015516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:21.015524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.210 [2024-11-25 13:05:21.028302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:21.028937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:21.028975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.210 [2024-11-25 13:05:21.028986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.210 [2024-11-25 13:05:21.029224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.210 [2024-11-25 13:05:21.029447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.210 [2024-11-25 13:05:21.029456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.210 [2024-11-25 13:05:21.029464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.210 [2024-11-25 13:05:21.029472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.210 [2024-11-25 13:05:21.042238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.210 [2024-11-25 13:05:21.042790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.210 [2024-11-25 13:05:21.042828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.211 [2024-11-25 13:05:21.042841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.211 [2024-11-25 13:05:21.043090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.211 [2024-11-25 13:05:21.043314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.211 [2024-11-25 13:05:21.043324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.211 [2024-11-25 13:05:21.043331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.211 [2024-11-25 13:05:21.043344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.211 [2024-11-25 13:05:21.056106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.211 [2024-11-25 13:05:21.056623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.211 [2024-11-25 13:05:21.056660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.211 [2024-11-25 13:05:21.056673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.211 [2024-11-25 13:05:21.056922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.211 [2024-11-25 13:05:21.057147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.211 [2024-11-25 13:05:21.057156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.211 [2024-11-25 13:05:21.057164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.211 [2024-11-25 13:05:21.057172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.211 [2024-11-25 13:05:21.069937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.211 [2024-11-25 13:05:21.070483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.211 [2024-11-25 13:05:21.070503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.211 [2024-11-25 13:05:21.070511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.211 [2024-11-25 13:05:21.070732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.211 [2024-11-25 13:05:21.070957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.211 [2024-11-25 13:05:21.070966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.211 [2024-11-25 13:05:21.070973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.211 [2024-11-25 13:05:21.070982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.211 [2024-11-25 13:05:21.083735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.211 [2024-11-25 13:05:21.084272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.211 [2024-11-25 13:05:21.084289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.211 [2024-11-25 13:05:21.084297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.211 [2024-11-25 13:05:21.084515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.211 [2024-11-25 13:05:21.084733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.211 [2024-11-25 13:05:21.084742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.211 [2024-11-25 13:05:21.084749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.211 [2024-11-25 13:05:21.084756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.211 [2024-11-25 13:05:21.097900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.211 [2024-11-25 13:05:21.098439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.211 [2024-11-25 13:05:21.098461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.211 [2024-11-25 13:05:21.098468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.211 [2024-11-25 13:05:21.098688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.211 [2024-11-25 13:05:21.098924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.211 [2024-11-25 13:05:21.098934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.211 [2024-11-25 13:05:21.098941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.211 [2024-11-25 13:05:21.098948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.474 [2024-11-25 13:05:21.111708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.474 [2024-11-25 13:05:21.112395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.474 [2024-11-25 13:05:21.112433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.474 [2024-11-25 13:05:21.112445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.474 [2024-11-25 13:05:21.112683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.474 [2024-11-25 13:05:21.112914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.474 [2024-11-25 13:05:21.112924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.474 [2024-11-25 13:05:21.112932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.474 [2024-11-25 13:05:21.112941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.474 [2024-11-25 13:05:21.125708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.474 [2024-11-25 13:05:21.126214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.474 [2024-11-25 13:05:21.126252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.474 [2024-11-25 13:05:21.126265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.474 [2024-11-25 13:05:21.126507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.474 [2024-11-25 13:05:21.126730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.474 [2024-11-25 13:05:21.126740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.474 [2024-11-25 13:05:21.126748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.474 [2024-11-25 13:05:21.126757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.474 [2024-11-25 13:05:21.139516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.474 [2024-11-25 13:05:21.139975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.474 [2024-11-25 13:05:21.140013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.474 [2024-11-25 13:05:21.140026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.474 [2024-11-25 13:05:21.140273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.474 [2024-11-25 13:05:21.140496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.474 [2024-11-25 13:05:21.140505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.474 [2024-11-25 13:05:21.140513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.474 [2024-11-25 13:05:21.140521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.474 [2024-11-25 13:05:21.153496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.474 [2024-11-25 13:05:21.154147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.474 [2024-11-25 13:05:21.154185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.474 [2024-11-25 13:05:21.154196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.474 [2024-11-25 13:05:21.154434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.474 [2024-11-25 13:05:21.154657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.474 [2024-11-25 13:05:21.154666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.474 [2024-11-25 13:05:21.154674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.474 [2024-11-25 13:05:21.154682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.474 [2024-11-25 13:05:21.167439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.474 [2024-11-25 13:05:21.167981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.474 [2024-11-25 13:05:21.168020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.474 [2024-11-25 13:05:21.168032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.474 [2024-11-25 13:05:21.168274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.474 [2024-11-25 13:05:21.168497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.474 [2024-11-25 13:05:21.168506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.474 [2024-11-25 13:05:21.168514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.474 [2024-11-25 13:05:21.168522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.474 [2024-11-25 13:05:21.181331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.474 [2024-11-25 13:05:21.181890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.474 [2024-11-25 13:05:21.181910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.474 [2024-11-25 13:05:21.181918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.474 [2024-11-25 13:05:21.182137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.474 [2024-11-25 13:05:21.182356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.474 [2024-11-25 13:05:21.182369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.474 [2024-11-25 13:05:21.182376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.474 [2024-11-25 13:05:21.182383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.474 [2024-11-25 13:05:21.195194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.474 [2024-11-25 13:05:21.195756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.195774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.195781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.196010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.196231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.196240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.196248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.196255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.475 [2024-11-25 13:05:21.209040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.209576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.209593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.209600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.209819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.210044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.210053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.210061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.210068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.475 [2024-11-25 13:05:21.222827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.223491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.223529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.223540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.223778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.224010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.224020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.224028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.224040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.475 [2024-11-25 13:05:21.236818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.237369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.237390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.237398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.237618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.237837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.237845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.237852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.237859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.475 [2024-11-25 13:05:21.250640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.251183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.251200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.251208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.251427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.251645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.251653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.251660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.251667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.475 [2024-11-25 13:05:21.264434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.264968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.264985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.264992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.265211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.265429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.265437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.265445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.265452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.475 [2024-11-25 13:05:21.278428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.278989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.279010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.279018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.279237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.279455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.279463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.279470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.279477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.475 [2024-11-25 13:05:21.292243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.292769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.292786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.292793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.293017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.293236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.293244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.293251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.293257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.475 [2024-11-25 13:05:21.306033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.306610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.306626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.306634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.306852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.307077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.307086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.307093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.307100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.475 [2024-11-25 13:05:21.319860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.475 [2024-11-25 13:05:21.320483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.475 [2024-11-25 13:05:21.320521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.475 [2024-11-25 13:05:21.320533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.475 [2024-11-25 13:05:21.320777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.475 [2024-11-25 13:05:21.321010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.475 [2024-11-25 13:05:21.321020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.475 [2024-11-25 13:05:21.321028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.475 [2024-11-25 13:05:21.321036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.476 [2024-11-25 13:05:21.333808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.476 [2024-11-25 13:05:21.334486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.476 [2024-11-25 13:05:21.334525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.476 [2024-11-25 13:05:21.334536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.476 [2024-11-25 13:05:21.334775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.476 [2024-11-25 13:05:21.335008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.476 [2024-11-25 13:05:21.335018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.476 [2024-11-25 13:05:21.335025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.476 [2024-11-25 13:05:21.335034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.476 [2024-11-25 13:05:21.347809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.476 [2024-11-25 13:05:21.348493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.476 [2024-11-25 13:05:21.348531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.476 [2024-11-25 13:05:21.348543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.476 [2024-11-25 13:05:21.348783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.476 [2024-11-25 13:05:21.349016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.476 [2024-11-25 13:05:21.349026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.476 [2024-11-25 13:05:21.349034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.476 [2024-11-25 13:05:21.349042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.476 [2024-11-25 13:05:21.361608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.476 [2024-11-25 13:05:21.362270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.476 [2024-11-25 13:05:21.362308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.476 [2024-11-25 13:05:21.362319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.476 [2024-11-25 13:05:21.362557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.476 [2024-11-25 13:05:21.362781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.476 [2024-11-25 13:05:21.362794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.476 [2024-11-25 13:05:21.362802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.476 [2024-11-25 13:05:21.362810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.476 [2024-11-25 13:05:21.375463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.738 [2024-11-25 13:05:21.376161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.738 [2024-11-25 13:05:21.376200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.738 [2024-11-25 13:05:21.376211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.738 [2024-11-25 13:05:21.376449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.738 [2024-11-25 13:05:21.376672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.738 [2024-11-25 13:05:21.376681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.738 [2024-11-25 13:05:21.376689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.738 [2024-11-25 13:05:21.376697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.738 [2024-11-25 13:05:21.389458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.738 [2024-11-25 13:05:21.390106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.738 [2024-11-25 13:05:21.390144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.738 [2024-11-25 13:05:21.390156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.738 [2024-11-25 13:05:21.390396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.738 [2024-11-25 13:05:21.390619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.738 [2024-11-25 13:05:21.390628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.738 [2024-11-25 13:05:21.390636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.738 [2024-11-25 13:05:21.390645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.738 [2024-11-25 13:05:21.403428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.738 [2024-11-25 13:05:21.403992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.738 [2024-11-25 13:05:21.404030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.738 [2024-11-25 13:05:21.404041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.738 [2024-11-25 13:05:21.404280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.738 [2024-11-25 13:05:21.404503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.738 [2024-11-25 13:05:21.404512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.738 [2024-11-25 13:05:21.404519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.738 [2024-11-25 13:05:21.404527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.738 [2024-11-25 13:05:21.417290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.738 [2024-11-25 13:05:21.417977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.738 [2024-11-25 13:05:21.418015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.738 [2024-11-25 13:05:21.418027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.738 [2024-11-25 13:05:21.418267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.738 [2024-11-25 13:05:21.418491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.738 [2024-11-25 13:05:21.418500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.738 [2024-11-25 13:05:21.418508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.738 [2024-11-25 13:05:21.418516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.738 [2024-11-25 13:05:21.431098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.738 [2024-11-25 13:05:21.431767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.738 [2024-11-25 13:05:21.431805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.738 [2024-11-25 13:05:21.431818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.738 [2024-11-25 13:05:21.432070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.738 [2024-11-25 13:05:21.432294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.738 [2024-11-25 13:05:21.432303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.738 [2024-11-25 13:05:21.432310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.738 [2024-11-25 13:05:21.432318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.738 [2024-11-25 13:05:21.445089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.738 [2024-11-25 13:05:21.445646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.738 [2024-11-25 13:05:21.445665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.738 [2024-11-25 13:05:21.445673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.738 [2024-11-25 13:05:21.445900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.738 [2024-11-25 13:05:21.446120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.738 [2024-11-25 13:05:21.446128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.738 [2024-11-25 13:05:21.446135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.738 [2024-11-25 13:05:21.446142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.738 [2024-11-25 13:05:21.458919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.738 [2024-11-25 13:05:21.459540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.738 [2024-11-25 13:05:21.459583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.738 [2024-11-25 13:05:21.459595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.738 [2024-11-25 13:05:21.459834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.738 [2024-11-25 13:05:21.460068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.738 [2024-11-25 13:05:21.460080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.738 [2024-11-25 13:05:21.460088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.738 [2024-11-25 13:05:21.460097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.738 [2024-11-25 13:05:21.472866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.738 [2024-11-25 13:05:21.473412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.738 [2024-11-25 13:05:21.473431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.738 [2024-11-25 13:05:21.473439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.738 [2024-11-25 13:05:21.473659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.738 [2024-11-25 13:05:21.473885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.738 [2024-11-25 13:05:21.473893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.473900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.473907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.739 [2024-11-25 13:05:21.486663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.487298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.487336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.487347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.487586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.487809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.487818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.487826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.487834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.739 [2024-11-25 13:05:21.500629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.501162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.501182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.501190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.501419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.501639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.501647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.501654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.501661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.739 [2024-11-25 13:05:21.514434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.514993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.515032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.515044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.515286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.515509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.515519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.515527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.515535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.739 [2024-11-25 13:05:21.528301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.528889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.528909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.528917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.529137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.529355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.529363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.529371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.529377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.739 [2024-11-25 13:05:21.542135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.542674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.542691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.542698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.542925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.543144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.543158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.543165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.543171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.739 [2024-11-25 13:05:21.555935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.556578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.556616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.556627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.556876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.557100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.557109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.557117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.557125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.739 [2024-11-25 13:05:21.569900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.570448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.570467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.570475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.570695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.570920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.570929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.570936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.570943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.739 [2024-11-25 13:05:21.583705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.584242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.584259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.584267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.584485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.584705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.584713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.584720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.584727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.739 [2024-11-25 13:05:21.597502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.598047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.598065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.598073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.739 [2024-11-25 13:05:21.598291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.739 [2024-11-25 13:05:21.598510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.739 [2024-11-25 13:05:21.598518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.739 [2024-11-25 13:05:21.598525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.739 [2024-11-25 13:05:21.598531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.739 [2024-11-25 13:05:21.611310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.739 [2024-11-25 13:05:21.611830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.739 [2024-11-25 13:05:21.611846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.739 [2024-11-25 13:05:21.611854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.740 [2024-11-25 13:05:21.612078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.740 [2024-11-25 13:05:21.612296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.740 [2024-11-25 13:05:21.612304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.740 [2024-11-25 13:05:21.612311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.740 [2024-11-25 13:05:21.612317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:41.740 [2024-11-25 13:05:21.625298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:41.740 [2024-11-25 13:05:21.625873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.740 [2024-11-25 13:05:21.625890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:41.740 [2024-11-25 13:05:21.625898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:41.740 [2024-11-25 13:05:21.626117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:41.740 [2024-11-25 13:05:21.626336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:41.740 [2024-11-25 13:05:21.626343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:41.740 [2024-11-25 13:05:21.626350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:41.740 [2024-11-25 13:05:21.626357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:41.740 [2024-11-25 13:05:21.639119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.002 [2024-11-25 13:05:21.639655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.002 [2024-11-25 13:05:21.639677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.002 [2024-11-25 13:05:21.639684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.002 [2024-11-25 13:05:21.639909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.002 [2024-11-25 13:05:21.640129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.002 [2024-11-25 13:05:21.640137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.002 [2024-11-25 13:05:21.640144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.002 [2024-11-25 13:05:21.640151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.002 [2024-11-25 13:05:21.652912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.002 [2024-11-25 13:05:21.653340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.002 [2024-11-25 13:05:21.653356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.002 [2024-11-25 13:05:21.653363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.002 [2024-11-25 13:05:21.653582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.002 [2024-11-25 13:05:21.653800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.002 [2024-11-25 13:05:21.653807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.002 [2024-11-25 13:05:21.653814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.002 [2024-11-25 13:05:21.653821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.002 [2024-11-25 13:05:21.666782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.002 [2024-11-25 13:05:21.667316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.002 [2024-11-25 13:05:21.667333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.002 [2024-11-25 13:05:21.667340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.002 [2024-11-25 13:05:21.667558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.002 [2024-11-25 13:05:21.667776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.002 [2024-11-25 13:05:21.667789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.002 [2024-11-25 13:05:21.667796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.002 [2024-11-25 13:05:21.667803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.002 [2024-11-25 13:05:21.680761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.002 [2024-11-25 13:05:21.681316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.002 [2024-11-25 13:05:21.681355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.002 [2024-11-25 13:05:21.681367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.002 [2024-11-25 13:05:21.681611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.002 [2024-11-25 13:05:21.681834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.003 [2024-11-25 13:05:21.681843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.003 [2024-11-25 13:05:21.681851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.003 [2024-11-25 13:05:21.681859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.003 [2024-11-25 13:05:21.694705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.003 [2024-11-25 13:05:21.695258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.003 [2024-11-25 13:05:21.695295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.003 [2024-11-25 13:05:21.695307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.003 [2024-11-25 13:05:21.695547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.003 [2024-11-25 13:05:21.695773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.003 [2024-11-25 13:05:21.695782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.003 [2024-11-25 13:05:21.695791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.003 [2024-11-25 13:05:21.695799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.003 [2024-11-25 13:05:21.708590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.003 [2024-11-25 13:05:21.709226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.003 [2024-11-25 13:05:21.709264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.003 [2024-11-25 13:05:21.709275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.003 [2024-11-25 13:05:21.709514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.003 [2024-11-25 13:05:21.709737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.003 [2024-11-25 13:05:21.709746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.003 [2024-11-25 13:05:21.709754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.003 [2024-11-25 13:05:21.709762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.003 [2024-11-25 13:05:21.722543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.003 [2024-11-25 13:05:21.723109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.003 [2024-11-25 13:05:21.723129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.003 [2024-11-25 13:05:21.723137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.003 [2024-11-25 13:05:21.723356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.003 [2024-11-25 13:05:21.723574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.003 [2024-11-25 13:05:21.723587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.003 [2024-11-25 13:05:21.723594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.003 [2024-11-25 13:05:21.723601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.003 [2024-11-25 13:05:21.736379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.003 [2024-11-25 13:05:21.736975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.003 [2024-11-25 13:05:21.737013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.003 [2024-11-25 13:05:21.737025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.003 [2024-11-25 13:05:21.737268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.003 [2024-11-25 13:05:21.737490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.003 [2024-11-25 13:05:21.737508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.003 [2024-11-25 13:05:21.737516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.003 [2024-11-25 13:05:21.737524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.003 [2024-11-25 13:05:21.750366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.003 [2024-11-25 13:05:21.750958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.003 [2024-11-25 13:05:21.750978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.003 [2024-11-25 13:05:21.750986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.003 [2024-11-25 13:05:21.751206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.003 [2024-11-25 13:05:21.751425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.003 [2024-11-25 13:05:21.751432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.003 [2024-11-25 13:05:21.751439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.003 [2024-11-25 13:05:21.751446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.003 [2024-11-25 13:05:21.764195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.003 [2024-11-25 13:05:21.764733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.003 [2024-11-25 13:05:21.764750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.003 [2024-11-25 13:05:21.764757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.003 [2024-11-25 13:05:21.764980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.003 [2024-11-25 13:05:21.765200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.003 [2024-11-25 13:05:21.765208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.003 [2024-11-25 13:05:21.765215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.003 [2024-11-25 13:05:21.765222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:42.003 [2024-11-25 13:05:21.778184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:42.003 [2024-11-25 13:05:21.778799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:42.003 [2024-11-25 13:05:21.778836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:42.003 [2024-11-25 13:05:21.778848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:42.003 [2024-11-25 13:05:21.779095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:42.003 [2024-11-25 13:05:21.779319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:42.003 [2024-11-25 13:05:21.779328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:42.003 [2024-11-25 13:05:21.779336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:42.003 [2024-11-25 13:05:21.779344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:42.003 [2024-11-25 13:05:21.792101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.003 [2024-11-25 13:05:21.792781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.003 [2024-11-25 13:05:21.792819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.003 [2024-11-25 13:05:21.792832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.003 [2024-11-25 13:05:21.793083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.003 [2024-11-25 13:05:21.793306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.003 [2024-11-25 13:05:21.793315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.003 [2024-11-25 13:05:21.793323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.003 [2024-11-25 13:05:21.793331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.003 [2024-11-25 13:05:21.806085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.003 [2024-11-25 13:05:21.806632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.003 [2024-11-25 13:05:21.806652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.003 [2024-11-25 13:05:21.806660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.003 [2024-11-25 13:05:21.806885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.003 [2024-11-25 13:05:21.807105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.003 [2024-11-25 13:05:21.807113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.003 [2024-11-25 13:05:21.807120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.003 [2024-11-25 13:05:21.807127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.003 [2024-11-25 13:05:21.819876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.003 [2024-11-25 13:05:21.820541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.003 [2024-11-25 13:05:21.820583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.003 [2024-11-25 13:05:21.820594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.004 [2024-11-25 13:05:21.820832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.004 [2024-11-25 13:05:21.821065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.004 [2024-11-25 13:05:21.821075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.004 [2024-11-25 13:05:21.821083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.004 [2024-11-25 13:05:21.821091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.004 [2024-11-25 13:05:21.833842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.004 [2024-11-25 13:05:21.834468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.004 [2024-11-25 13:05:21.834506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.004 [2024-11-25 13:05:21.834517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.004 [2024-11-25 13:05:21.834755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.004 [2024-11-25 13:05:21.834987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.004 [2024-11-25 13:05:21.834997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.004 [2024-11-25 13:05:21.835005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.004 [2024-11-25 13:05:21.835013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.004 [2024-11-25 13:05:21.847766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.004 [2024-11-25 13:05:21.848401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.004 [2024-11-25 13:05:21.848439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.004 [2024-11-25 13:05:21.848449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.004 [2024-11-25 13:05:21.848688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.004 [2024-11-25 13:05:21.848920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.004 [2024-11-25 13:05:21.848930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.004 [2024-11-25 13:05:21.848938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.004 [2024-11-25 13:05:21.848946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.004 [2024-11-25 13:05:21.861691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.004 [2024-11-25 13:05:21.862242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.004 [2024-11-25 13:05:21.862262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.004 [2024-11-25 13:05:21.862270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.004 [2024-11-25 13:05:21.862493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.004 [2024-11-25 13:05:21.862712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.004 [2024-11-25 13:05:21.862720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.004 [2024-11-25 13:05:21.862727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.004 [2024-11-25 13:05:21.862734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.004 [2024-11-25 13:05:21.875682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.004 [2024-11-25 13:05:21.876227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.004 [2024-11-25 13:05:21.876245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.004 [2024-11-25 13:05:21.876252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.004 [2024-11-25 13:05:21.876471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.004 [2024-11-25 13:05:21.876689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.004 [2024-11-25 13:05:21.876698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.004 [2024-11-25 13:05:21.876705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.004 [2024-11-25 13:05:21.876711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.004 [2024-11-25 13:05:21.889657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.004 [2024-11-25 13:05:21.890213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.004 [2024-11-25 13:05:21.890229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.004 [2024-11-25 13:05:21.890237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.004 [2024-11-25 13:05:21.890455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.004 [2024-11-25 13:05:21.890674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.004 [2024-11-25 13:05:21.890682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.004 [2024-11-25 13:05:21.890689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.004 [2024-11-25 13:05:21.890695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.267 [2024-11-25 13:05:21.903650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.267 [2024-11-25 13:05:21.904066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.267 [2024-11-25 13:05:21.904083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.267 [2024-11-25 13:05:21.904090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.267 [2024-11-25 13:05:21.904308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.267 [2024-11-25 13:05:21.904526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.267 [2024-11-25 13:05:21.904538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.267 [2024-11-25 13:05:21.904546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.267 [2024-11-25 13:05:21.904552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.267 [2024-11-25 13:05:21.917499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.267 [2024-11-25 13:05:21.918162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.267 [2024-11-25 13:05:21.918200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.267 [2024-11-25 13:05:21.918211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.267 [2024-11-25 13:05:21.918450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.267 [2024-11-25 13:05:21.918672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.267 [2024-11-25 13:05:21.918681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.267 [2024-11-25 13:05:21.918689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.267 [2024-11-25 13:05:21.918697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.267 [2024-11-25 13:05:21.931468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.267 [2024-11-25 13:05:21.931784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.267 [2024-11-25 13:05:21.931806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.267 [2024-11-25 13:05:21.931814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.267 [2024-11-25 13:05:21.932043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.267 [2024-11-25 13:05:21.932263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.267 [2024-11-25 13:05:21.932272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.267 [2024-11-25 13:05:21.932279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.267 [2024-11-25 13:05:21.932286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.267 [2024-11-25 13:05:21.945450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.267 [2024-11-25 13:05:21.946130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.267 [2024-11-25 13:05:21.946167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.267 [2024-11-25 13:05:21.946178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.267 [2024-11-25 13:05:21.946416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.267 [2024-11-25 13:05:21.946639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.267 [2024-11-25 13:05:21.946649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.267 [2024-11-25 13:05:21.946658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.267 [2024-11-25 13:05:21.946667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.267 [2024-11-25 13:05:21.959439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.267 [2024-11-25 13:05:21.960009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.267 [2024-11-25 13:05:21.960047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.267 [2024-11-25 13:05:21.960061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.267 [2024-11-25 13:05:21.960304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.267 [2024-11-25 13:05:21.960526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.267 [2024-11-25 13:05:21.960535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.267 [2024-11-25 13:05:21.960543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.267 [2024-11-25 13:05:21.960551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.267 [2024-11-25 13:05:21.973311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.267 [2024-11-25 13:05:21.973969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.267 [2024-11-25 13:05:21.974006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.267 [2024-11-25 13:05:21.974017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.267 [2024-11-25 13:05:21.974256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.267 [2024-11-25 13:05:21.974479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.267 [2024-11-25 13:05:21.974487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.267 [2024-11-25 13:05:21.974495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.267 [2024-11-25 13:05:21.974503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.267 [2024-11-25 13:05:21.987255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.267 [2024-11-25 13:05:21.987843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.267 [2024-11-25 13:05:21.987867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.267 [2024-11-25 13:05:21.987876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.267 [2024-11-25 13:05:21.988095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.267 [2024-11-25 13:05:21.988314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.267 [2024-11-25 13:05:21.988322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.267 [2024-11-25 13:05:21.988329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.267 [2024-11-25 13:05:21.988336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.267 5875.00 IOPS, 22.95 MiB/s [2024-11-25T12:05:22.170Z] [2024-11-25 13:05:22.002742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.267 [2024-11-25 13:05:22.003211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.267 [2024-11-25 13:05:22.003253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.267 [2024-11-25 13:05:22.003266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.267 [2024-11-25 13:05:22.003505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.267 [2024-11-25 13:05:22.003728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.267 [2024-11-25 13:05:22.003737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.267 [2024-11-25 13:05:22.003745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.003753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.016729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.017368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.017406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.017417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.017655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.017890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.017907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.017915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.017923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.030685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.031347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.031385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.031396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.031635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.031858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.031876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.031884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.031893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.044645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.045281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.045319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.045330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.045573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.045796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.045805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.045813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.045821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.058572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.059236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.059274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.059285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.059524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.059747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.059755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.059763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.059771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.072530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.073169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.073207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.073218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.073456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.073679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.073688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.073696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.073704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.086457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.087155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.087193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.087204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.087443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.087665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.087679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.087687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.087696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.100439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.101027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.101047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.101055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.101276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.101494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.101503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.101511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.101518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.114278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.114851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.114873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.114881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.115099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.115318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.115326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.115333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.115340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.128097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.128762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.128800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.128811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.129057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.129281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.129290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.268 [2024-11-25 13:05:22.129297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.268 [2024-11-25 13:05:22.129310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.268 [2024-11-25 13:05:22.142060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.268 [2024-11-25 13:05:22.142731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.268 [2024-11-25 13:05:22.142768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.268 [2024-11-25 13:05:22.142779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.268 [2024-11-25 13:05:22.143026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.268 [2024-11-25 13:05:22.143250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.268 [2024-11-25 13:05:22.143259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.269 [2024-11-25 13:05:22.143267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.269 [2024-11-25 13:05:22.143275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.269 [2024-11-25 13:05:22.156027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.269 [2024-11-25 13:05:22.156701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.269 [2024-11-25 13:05:22.156738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.269 [2024-11-25 13:05:22.156749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.269 [2024-11-25 13:05:22.156996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.269 [2024-11-25 13:05:22.157220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.269 [2024-11-25 13:05:22.157229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.269 [2024-11-25 13:05:22.157237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.269 [2024-11-25 13:05:22.157245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.531 [2024-11-25 13:05:22.170006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.531 [2024-11-25 13:05:22.170680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.531 [2024-11-25 13:05:22.170718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.531 [2024-11-25 13:05:22.170729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.531 [2024-11-25 13:05:22.170977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.531 [2024-11-25 13:05:22.171201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.531 [2024-11-25 13:05:22.171210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.531 [2024-11-25 13:05:22.171217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.531 [2024-11-25 13:05:22.171225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.531 [2024-11-25 13:05:22.183979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.531 [2024-11-25 13:05:22.184634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.531 [2024-11-25 13:05:22.184681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.531 [2024-11-25 13:05:22.184692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.531 [2024-11-25 13:05:22.184940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.531 [2024-11-25 13:05:22.185164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.531 [2024-11-25 13:05:22.185173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.531 [2024-11-25 13:05:22.185180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.531 [2024-11-25 13:05:22.185188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.531 [2024-11-25 13:05:22.197936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.531 [2024-11-25 13:05:22.198451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.531 [2024-11-25 13:05:22.198471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.531 [2024-11-25 13:05:22.198481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.531 [2024-11-25 13:05:22.198701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.531 [2024-11-25 13:05:22.198926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.531 [2024-11-25 13:05:22.198935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.531 [2024-11-25 13:05:22.198942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.531 [2024-11-25 13:05:22.198949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.531 [2024-11-25 13:05:22.211721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.531 [2024-11-25 13:05:22.212270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.531 [2024-11-25 13:05:22.212288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.531 [2024-11-25 13:05:22.212297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.531 [2024-11-25 13:05:22.212517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.531 [2024-11-25 13:05:22.212736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.531 [2024-11-25 13:05:22.212745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.531 [2024-11-25 13:05:22.212752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.531 [2024-11-25 13:05:22.212760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.531 [2024-11-25 13:05:22.225514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.531 [2024-11-25 13:05:22.226061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.531 [2024-11-25 13:05:22.226078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.531 [2024-11-25 13:05:22.226086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.531 [2024-11-25 13:05:22.226308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.531 [2024-11-25 13:05:22.226527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.531 [2024-11-25 13:05:22.226535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.531 [2024-11-25 13:05:22.226542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.226549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.239294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.239815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.239831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.239839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.240062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.240280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.240288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.240295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.240301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.253249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.253773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.253789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.253796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.254020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.254240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.254248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.254255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.254261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.267209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.267732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.267748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.267755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.267978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.268197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.268208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.268215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.268222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.281170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.281748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.281764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.281771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.281994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.282213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.282221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.282228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.282235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.294974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.295536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.295552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.295559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.295777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.296001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.296010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.296017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.296024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.308770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.309391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.309429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.309439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.309679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.309912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.309922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.309930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.309942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.322691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.323238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.323276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.323287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.323526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.323749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.323757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.323765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.323773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.336546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.337234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.337272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.337282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.337520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.337743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.337753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.337761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.337769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.350527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.351157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.351194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.351205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.351444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.351667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.532 [2024-11-25 13:05:22.351676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.532 [2024-11-25 13:05:22.351684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.532 [2024-11-25 13:05:22.351692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.532 [2024-11-25 13:05:22.364451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.532 [2024-11-25 13:05:22.365146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.532 [2024-11-25 13:05:22.365188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.532 [2024-11-25 13:05:22.365200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.532 [2024-11-25 13:05:22.365438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.532 [2024-11-25 13:05:22.365661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.533 [2024-11-25 13:05:22.365670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.533 [2024-11-25 13:05:22.365678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.533 [2024-11-25 13:05:22.365686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.533 [2024-11-25 13:05:22.378446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.533 [2024-11-25 13:05:22.379003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.533 [2024-11-25 13:05:22.379023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.533 [2024-11-25 13:05:22.379031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.533 [2024-11-25 13:05:22.379251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.533 [2024-11-25 13:05:22.379469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.533 [2024-11-25 13:05:22.379478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.533 [2024-11-25 13:05:22.379485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.533 [2024-11-25 13:05:22.379492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.533 [2024-11-25 13:05:22.392240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.533 [2024-11-25 13:05:22.392812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.533 [2024-11-25 13:05:22.392828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.533 [2024-11-25 13:05:22.392836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.533 [2024-11-25 13:05:22.393060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.533 [2024-11-25 13:05:22.393279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.533 [2024-11-25 13:05:22.393287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.533 [2024-11-25 13:05:22.393294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.533 [2024-11-25 13:05:22.393300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.533 [2024-11-25 13:05:22.406095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.533 [2024-11-25 13:05:22.406720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.533 [2024-11-25 13:05:22.406757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.533 [2024-11-25 13:05:22.406768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.533 [2024-11-25 13:05:22.407020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.533 [2024-11-25 13:05:22.407244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.533 [2024-11-25 13:05:22.407253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.533 [2024-11-25 13:05:22.407261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.533 [2024-11-25 13:05:22.407269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.533 [2024-11-25 13:05:22.420017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.533 [2024-11-25 13:05:22.420662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.533 [2024-11-25 13:05:22.420700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.533 [2024-11-25 13:05:22.420712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.533 [2024-11-25 13:05:22.420959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.533 [2024-11-25 13:05:22.421183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.533 [2024-11-25 13:05:22.421191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.533 [2024-11-25 13:05:22.421199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.533 [2024-11-25 13:05:22.421207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.796 [2024-11-25 13:05:22.433971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.796 [2024-11-25 13:05:22.434653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.796 [2024-11-25 13:05:22.434691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.796 [2024-11-25 13:05:22.434702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.796 [2024-11-25 13:05:22.434953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.796 [2024-11-25 13:05:22.435176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.796 [2024-11-25 13:05:22.435185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.796 [2024-11-25 13:05:22.435193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.435200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.447955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.448628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.448666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.448676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.448924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.449149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.449164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.449173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.449181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.461940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.462604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.462642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.462654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.462902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.463126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.463136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.463145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.463153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.475909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.476357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.476376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.476383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.476603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.476821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.476830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.476837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.476844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.489793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.490371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.490388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.490396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.490614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.490832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.490840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.490847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.490859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.503605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.504145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.504163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.504171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.504389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.504608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.504615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.504622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.504629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.517581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.518148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.518165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.518173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.518391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.518609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.518617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.518624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.518631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.531385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.532059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.532097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.532108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.532346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.532569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.532578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.532586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.532594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.545356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.545976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.546018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.546031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.546270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.546493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.546502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.546510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.546518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.559276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.559935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.559973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.559984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.560223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.797 [2024-11-25 13:05:22.560446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.797 [2024-11-25 13:05:22.560455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.797 [2024-11-25 13:05:22.560462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.797 [2024-11-25 13:05:22.560470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.797 [2024-11-25 13:05:22.573231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.797 [2024-11-25 13:05:22.573915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.797 [2024-11-25 13:05:22.573953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.797 [2024-11-25 13:05:22.573965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.797 [2024-11-25 13:05:22.574206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.574429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.574438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.574446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.574453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 [2024-11-25 13:05:22.587213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.798 [2024-11-25 13:05:22.587747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.798 [2024-11-25 13:05:22.587785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.798 [2024-11-25 13:05:22.587796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.798 [2024-11-25 13:05:22.588048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.588271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.588280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.588288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.588296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 [2024-11-25 13:05:22.601049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.798 [2024-11-25 13:05:22.601711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.798 [2024-11-25 13:05:22.601749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.798 [2024-11-25 13:05:22.601760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.798 [2024-11-25 13:05:22.602007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.602231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.602240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.602248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.602256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 [2024-11-25 13:05:22.615019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.798 [2024-11-25 13:05:22.615694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.798 [2024-11-25 13:05:22.615731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.798 [2024-11-25 13:05:22.615742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.798 [2024-11-25 13:05:22.615990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.616214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.616222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.616230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.616238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 [2024-11-25 13:05:22.629002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.798 [2024-11-25 13:05:22.629673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.798 [2024-11-25 13:05:22.629711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.798 [2024-11-25 13:05:22.629722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.798 [2024-11-25 13:05:22.629969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.630193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.630206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.630214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.630222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 [2024-11-25 13:05:22.642977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.798 [2024-11-25 13:05:22.643660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.798 [2024-11-25 13:05:22.643697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.798 [2024-11-25 13:05:22.643708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.798 [2024-11-25 13:05:22.643955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.644179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.644188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.644195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.644203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 [2024-11-25 13:05:22.656953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.798 [2024-11-25 13:05:22.657626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.798 [2024-11-25 13:05:22.657664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.798 [2024-11-25 13:05:22.657675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.798 [2024-11-25 13:05:22.657923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.658147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.658159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.658167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.658176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 [2024-11-25 13:05:22.670934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.798 [2024-11-25 13:05:22.671476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.798 [2024-11-25 13:05:22.671495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.798 [2024-11-25 13:05:22.671503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.798 [2024-11-25 13:05:22.671722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.671947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.671956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.671963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.671974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 828119 Killed "${NVMF_APP[@]}" "$@"
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:42.798 [2024-11-25 13:05:22.684725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:42.798 [2024-11-25 13:05:22.685260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:42.798 [2024-11-25 13:05:22.685278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:42.798 [2024-11-25 13:05:22.685285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:42.798 [2024-11-25 13:05:22.685504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:42.798 [2024-11-25 13:05:22.685723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:42.798 [2024-11-25 13:05:22.685731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:42.798 [2024-11-25 13:05:22.685738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:42.798 [2024-11-25 13:05:22.685745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=829738
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 829738
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 829738 ']'
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:42.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:42.798 13:05:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:43.062 [2024-11-25 13:05:22.698711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.699177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.699193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.699202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.699422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.699640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.062 [2024-11-25 13:05:22.699648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.062 [2024-11-25 13:05:22.699659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.062 [2024-11-25 13:05:22.699667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.062 [2024-11-25 13:05:22.712637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.713163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.713180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.713190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.713416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.713636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.062 [2024-11-25 13:05:22.713647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.062 [2024-11-25 13:05:22.713654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.062 [2024-11-25 13:05:22.713662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.062 [2024-11-25 13:05:22.726555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.727223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.727261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.727274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.727516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.727740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.062 [2024-11-25 13:05:22.727749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.062 [2024-11-25 13:05:22.727756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.062 [2024-11-25 13:05:22.727764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.062 [2024-11-25 13:05:22.740525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.741180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.741217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.741229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.741468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.741691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.062 [2024-11-25 13:05:22.741700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.062 [2024-11-25 13:05:22.741708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.062 [2024-11-25 13:05:22.741716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.062 [2024-11-25 13:05:22.744265] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:30:43.062 [2024-11-25 13:05:22.744324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:43.062 [2024-11-25 13:05:22.754479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.754989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.755027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.755039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.755280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.755503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.062 [2024-11-25 13:05:22.755513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.062 [2024-11-25 13:05:22.755521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.062 [2024-11-25 13:05:22.755529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.062 [2024-11-25 13:05:22.768311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.768764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.768783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.768792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.769019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.769239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.062 [2024-11-25 13:05:22.769248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.062 [2024-11-25 13:05:22.769257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.062 [2024-11-25 13:05:22.769264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.062 [2024-11-25 13:05:22.782171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.782810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.782848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.782860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.783110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.783334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.062 [2024-11-25 13:05:22.783343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.062 [2024-11-25 13:05:22.783351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.062 [2024-11-25 13:05:22.783359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.062 [2024-11-25 13:05:22.796130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.796771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.796809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.796822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.797071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.797295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.062 [2024-11-25 13:05:22.797305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.062 [2024-11-25 13:05:22.797313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.062 [2024-11-25 13:05:22.797321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.062 [2024-11-25 13:05:22.810099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.062 [2024-11-25 13:05:22.810635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.062 [2024-11-25 13:05:22.810672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.062 [2024-11-25 13:05:22.810685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.062 [2024-11-25 13:05:22.810932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.062 [2024-11-25 13:05:22.811156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.811164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.811173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.811181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.823939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.063 [2024-11-25 13:05:22.824484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.063 [2024-11-25 13:05:22.824505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.063 [2024-11-25 13:05:22.824513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.063 [2024-11-25 13:05:22.824733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.063 [2024-11-25 13:05:22.824956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.824966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.824974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.824982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.837744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.063 [2024-11-25 13:05:22.838356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.063 [2024-11-25 13:05:22.838394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.063 [2024-11-25 13:05:22.838409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.063 [2024-11-25 13:05:22.838649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.063 [2024-11-25 13:05:22.838881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.838891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.838899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.838907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.846407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:43.063 [2024-11-25 13:05:22.851675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.063 [2024-11-25 13:05:22.852200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.063 [2024-11-25 13:05:22.852220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.063 [2024-11-25 13:05:22.852229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.063 [2024-11-25 13:05:22.852450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.063 [2024-11-25 13:05:22.852669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.852677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.852684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.852691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.865665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.063 [2024-11-25 13:05:22.866268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.063 [2024-11-25 13:05:22.866286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.063 [2024-11-25 13:05:22.866293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.063 [2024-11-25 13:05:22.866512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.063 [2024-11-25 13:05:22.866730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.866740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.866747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.866754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.875630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:43.063 [2024-11-25 13:05:22.875653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:43.063 [2024-11-25 13:05:22.875660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:43.063 [2024-11-25 13:05:22.875666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:43.063 [2024-11-25 13:05:22.875670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:43.063 [2024-11-25 13:05:22.876754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:43.063 [2024-11-25 13:05:22.876914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:43.063 [2024-11-25 13:05:22.877109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:43.063 [2024-11-25 13:05:22.879514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.063 [2024-11-25 13:05:22.880070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.063 [2024-11-25 13:05:22.880087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.063 [2024-11-25 13:05:22.880095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.063 [2024-11-25 13:05:22.880313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.063 [2024-11-25 13:05:22.880532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.880540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.880547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.880554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.893308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.063 [2024-11-25 13:05:22.893895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.063 [2024-11-25 13:05:22.893938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.063 [2024-11-25 13:05:22.893952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.063 [2024-11-25 13:05:22.894197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.063 [2024-11-25 13:05:22.894421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.894429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.894438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.894446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.907231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.063 [2024-11-25 13:05:22.907942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.063 [2024-11-25 13:05:22.907981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.063 [2024-11-25 13:05:22.907994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.063 [2024-11-25 13:05:22.908237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.063 [2024-11-25 13:05:22.908460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.908469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.908477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.908486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.921056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.063 [2024-11-25 13:05:22.921729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.063 [2024-11-25 13:05:22.921768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.063 [2024-11-25 13:05:22.921779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.063 [2024-11-25 13:05:22.922026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.063 [2024-11-25 13:05:22.922250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.063 [2024-11-25 13:05:22.922259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.063 [2024-11-25 13:05:22.922267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.063 [2024-11-25 13:05:22.922276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.063 [2024-11-25 13:05:22.935047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.063 [2024-11-25 13:05:22.935735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.063 [2024-11-25 13:05:22.935774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:43.063 [2024-11-25 13:05:22.935785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:43.063 [2024-11-25 13:05:22.936032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:43.063 [2024-11-25 13:05:22.936256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.063 [2024-11-25 13:05:22.936265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.064 [2024-11-25 13:05:22.936273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.064 [2024-11-25 13:05:22.936281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.064 [2024-11-25 13:05:22.949044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.064 [2024-11-25 13:05:22.949611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.064 [2024-11-25 13:05:22.949630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:43.064 [2024-11-25 13:05:22.949639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:43.064 [2024-11-25 13:05:22.949859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:43.064 [2024-11-25 13:05:22.950085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.064 [2024-11-25 13:05:22.950094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.064 [2024-11-25 13:05:22.950101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.064 [2024-11-25 13:05:22.950108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:43.064 [2024-11-25 13:05:22.962855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.326 [2024-11-25 13:05:22.963507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.326 [2024-11-25 13:05:22.963547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:43.326 [2024-11-25 13:05:22.963563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:43.326 [2024-11-25 13:05:22.963803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:43.326 [2024-11-25 13:05:22.964035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.326 [2024-11-25 13:05:22.964046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.326 [2024-11-25 13:05:22.964054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.326 [2024-11-25 13:05:22.964063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:43.326 [2024-11-25 13:05:22.976654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:43.326 [2024-11-25 13:05:22.977346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.326 [2024-11-25 13:05:22.977384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420 00:30:43.326 [2024-11-25 13:05:22.977395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set 00:30:43.326 [2024-11-25 13:05:22.977634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor 00:30:43.326 [2024-11-25 13:05:22.977858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:43.326 [2024-11-25 13:05:22.977876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:43.326 [2024-11-25 13:05:22.977884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:43.326 [2024-11-25 13:05:22.977892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
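Every cycle in this stretch is the same nine-record sequence: the host disconnects the controller, posix_sock_create()'s connect() to 10.0.0.2:4420 is refused (errno = 111, ECONNREFUSED, i.e. nothing is listening while the target side is down), the qpair flush then fails with EBADF (9), and spdk_nvme_ctrlr_reconnect_poll_async() reports the reinitialization as failed. A minimal standalone sketch of the socket-level probe behind those connect() records (plain POSIX C, not SPDK code; the address and cadence are taken from the log, the retry count is illustrative):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420), /* NVMe/TCP port, as in the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        for (int attempt = 1; attempt <= 5; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("attempt %d: connected\n", attempt);
                close(fd);
                return 0;
            }
            /* With no listener on the port, this prints errno = 111
             * (ECONNREFUSED) on Linux, matching the repeated
             * "connect() failed, errno = 111" records above. */
            fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                    attempt, errno, strerror(errno));
            close(fd);
            usleep(14000); /* the log's cycles are ~14 ms apart */
        }
        return 1;
    }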
[... identical cycle at 13:05:22.990 ...]
00:30:43.326 4895.83 IOPS, 19.12 MiB/s [2024-11-25T12:05:23.229Z]
[... identical cycle at 13:05:23.005 ...]
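The interleaved "4895.83 IOPS, 19.12 MiB/s" record is bdevperf's periodic throughput sample, printed while the reconnect attempts continue. The two figures are mutually consistent with a 4 KiB I/O size (the 4 KiB is inferred, not stated in this excerpt): 4895.83 x 4096 B/s is about 19.12 MiB/s. A one-line check:

    #include <stdio.h>

    int main(void)
    {
        double iops = 4895.83; /* from the bdevperf sample above */
        double bytes = 4096.0; /* inferred 4 KiB I/O size, not in the log */
        printf("%.2f MiB/s\n", iops * bytes / (1024.0 * 1024.0)); /* 19.12 */
        return 0;
    }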
[... the cycle repeats unchanged every ~14 ms from 13:05:23.019 through 13:05:23.505 (36 more attempts against tqpair=0x1d20040, addr=10.0.0.2, port=4420), each ending in "Resetting controller failed." ...]
00:30:43.855 [2024-11-25 13:05:23.518993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.855 [2024-11-25 13:05:23.519693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.855 [2024-11-25 13:05:23.519730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.855 [2024-11-25 13:05:23.519742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.855 [2024-11-25 13:05:23.519989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.855 [2024-11-25 13:05:23.520212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.855 [2024-11-25 13:05:23.520221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.855 [2024-11-25 13:05:23.520229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.855 [2024-11-25 13:05:23.520237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.855 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:43.855 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:30:43.855 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:43.855 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:43.855 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:43.855 [2024-11-25 13:05:23.532792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.855 [2024-11-25 13:05:23.533507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.855 [2024-11-25 13:05:23.533546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.855 [2024-11-25 13:05:23.533558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.855 [2024-11-25 13:05:23.533798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.856 [2024-11-25 13:05:23.534029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.856 [2024-11-25 13:05:23.534038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.856 [2024-11-25 13:05:23.534046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.856 [2024-11-25 13:05:23.534054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.856 [2024-11-25 13:05:23.546588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.856 [2024-11-25 13:05:23.547157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.856 [2024-11-25 13:05:23.547177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.856 [2024-11-25 13:05:23.547186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.856 [2024-11-25 13:05:23.547405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.856 [2024-11-25 13:05:23.547624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.856 [2024-11-25 13:05:23.547631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.856 [2024-11-25 13:05:23.547638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.856 [2024-11-25 13:05:23.547645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.856 [2024-11-25 13:05:23.560393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.856 [2024-11-25 13:05:23.560820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.856 [2024-11-25 13:05:23.560836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.856 [2024-11-25 13:05:23.560844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.856 [2024-11-25 13:05:23.561067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.856 [2024-11-25 13:05:23.561286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.856 [2024-11-25 13:05:23.561294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.856 [2024-11-25 13:05:23.561301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.856 [2024-11-25 13:05:23.561307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:43.856 [2024-11-25 13:05:23.572931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:43.856 [2024-11-25 13:05:23.574259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.856 [2024-11-25 13:05:23.574841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.856 [2024-11-25 13:05:23.574857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.856 [2024-11-25 13:05:23.574869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.856 [2024-11-25 13:05:23.575087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.856 [2024-11-25 13:05:23.575305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.856 [2024-11-25 13:05:23.575313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.856 [2024-11-25 13:05:23.575320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.856 [2024-11-25 13:05:23.575335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:43.856 [2024-11-25 13:05:23.588082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.856 [2024-11-25 13:05:23.588747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.856 [2024-11-25 13:05:23.588785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.856 [2024-11-25 13:05:23.588797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.856 [2024-11-25 13:05:23.589044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.856 [2024-11-25 13:05:23.589268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.856 [2024-11-25 13:05:23.589277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.856 [2024-11-25 13:05:23.589285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.856 [2024-11-25 13:05:23.589294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.856 [2024-11-25 13:05:23.602037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.856 [2024-11-25 13:05:23.602634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.856 [2024-11-25 13:05:23.602653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.856 [2024-11-25 13:05:23.602662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.856 [2024-11-25 13:05:23.602886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.856 [2024-11-25 13:05:23.603106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.856 [2024-11-25 13:05:23.603113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.856 [2024-11-25 13:05:23.603121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.856 [2024-11-25 13:05:23.603128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.856 Malloc0
00:30:43.856 [2024-11-25 13:05:23.615887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.856 [2024-11-25 13:05:23.616551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.856 [2024-11-25 13:05:23.616590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.856 [2024-11-25 13:05:23.616601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:43.856 [2024-11-25 13:05:23.616840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.856 [2024-11-25 13:05:23.617073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.856 [2024-11-25 13:05:23.617092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.856 [2024-11-25 13:05:23.617101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.856 [2024-11-25 13:05:23.617109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:43.856 [2024-11-25 13:05:23.629875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.856 [2024-11-25 13:05:23.630308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.856 [2024-11-25 13:05:23.630328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.856 [2024-11-25 13:05:23.630336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.856 [2024-11-25 13:05:23.630555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.856 [2024-11-25 13:05:23.630774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.856 [2024-11-25 13:05:23.630782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.856 [2024-11-25 13:05:23.630789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.856 [2024-11-25 13:05:23.630796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.856 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:43.856 [2024-11-25 13:05:23.643750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:43.856 [2024-11-25 13:05:23.644456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:43.856 [2024-11-25 13:05:23.644494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d20040 with addr=10.0.0.2, port=4420
00:30:43.856 [2024-11-25 13:05:23.644505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d20040 is same with the state(6) to be set
00:30:43.856 [2024-11-25 13:05:23.644744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20040 (9): Bad file descriptor
00:30:43.857 [2024-11-25 13:05:23.644975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:43.857 [2024-11-25 13:05:23.644985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:43.857 [2024-11-25 13:05:23.644993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:43.857 [2024-11-25 13:05:23.645001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:43.857 [2024-11-25 13:05:23.647335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:43.857 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.857 13:05:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 828713
00:30:43.857 [2024-11-25 13:05:23.657543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:44.117 [2024-11-25 13:05:23.817000] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:30:45.503 4477.86 IOPS, 17.49 MiB/s [2024-11-25T12:05:26.347Z] 5328.12 IOPS, 20.81 MiB/s [2024-11-25T12:05:27.290Z] 5971.11 IOPS, 23.32 MiB/s [2024-11-25T12:05:28.233Z] 6490.00 IOPS, 25.35 MiB/s [2024-11-25T12:05:29.175Z] 6906.55 IOPS, 26.98 MiB/s [2024-11-25T12:05:30.133Z] 7262.83 IOPS, 28.37 MiB/s [2024-11-25T12:05:31.076Z] 7577.46 IOPS, 29.60 MiB/s [2024-11-25T12:05:32.458Z] 7835.86 IOPS, 30.61 MiB/s
00:30:52.555 Latency(us)
00:30:52.555 [2024-11-25T12:05:32.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:52.555 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:52.555 Verification LBA range: start 0x0 length 0x4000
00:30:52.555 Nvme1n1 : 15.01 8061.15 31.49 10116.85 0.00 7016.25 795.31 15291.73
00:30:52.555 [2024-11-25T12:05:32.458Z] ===================================================================================================================
00:30:52.555 [2024-11-25T12:05:32.458Z] Total : 8061.15 31.49 10116.85 0.00 7016.25 795.31 15291.73
00:30:52.555 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:52.555 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:52.555 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.555 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:52.555 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:52.555 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:52.555 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:52.555 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:52.556 rmmod nvme_tcp
00:30:52.556 rmmod nvme_fabrics
00:30:52.556 rmmod nvme_keyring
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 829738 ']'
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 829738
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 829738 ']'
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 829738
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 829738
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 829738'
00:30:52.556 killing process with pid 829738
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 829738
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 829738
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:52.556 13:05:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:55.099
00:30:55.099 real 0m28.556s
00:30:55.099 user 1m2.974s
00:30:55.099 sys 0m7.929s
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:55.099 ************************************
00:30:55.099 END TEST nvmf_bdevperf
00:30:55.099 ************************************
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.099 ************************************
00:30:55.099 START TEST nvmf_target_disconnect
00:30:55.099 ************************************
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:55.099 * Looking for test storage...
00:30:55.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:30:55.099 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:55.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:55.100 --rc genhtml_branch_coverage=1
00:30:55.100 --rc genhtml_function_coverage=1
00:30:55.100 --rc genhtml_legend=1
00:30:55.100 --rc geninfo_all_blocks=1
00:30:55.100 --rc geninfo_unexecuted_blocks=1
00:30:55.100
00:30:55.100 '
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:30:55.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:55.100 --rc genhtml_branch_coverage=1
00:30:55.100 --rc genhtml_function_coverage=1
00:30:55.100 --rc genhtml_legend=1
00:30:55.100 --rc geninfo_all_blocks=1
00:30:55.100 --rc geninfo_unexecuted_blocks=1
00:30:55.100
00:30:55.100 '
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:30:55.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:55.100 --rc genhtml_branch_coverage=1
00:30:55.100 --rc genhtml_function_coverage=1
00:30:55.100 --rc genhtml_legend=1
00:30:55.100 --rc geninfo_all_blocks=1
00:30:55.100 --rc geninfo_unexecuted_blocks=1
00:30:55.100
00:30:55.100 '
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:30:55.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:55.100 --rc genhtml_branch_coverage=1
00:30:55.100 --rc genhtml_function_coverage=1
00:30:55.100 --rc genhtml_legend=1
00:30:55.100 --rc geninfo_all_blocks=1
00:30:55.100 --rc geninfo_unexecuted_blocks=1
00:30:55.100
00:30:55.100 '
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:55.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:30:55.100 13:05:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:31:03.253 Found 0000:31:00.0 (0x8086 - 0x159b)
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:31:03.253 Found 0000:31:00.1 (0x8086 - 0x159b)
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:03.253 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:31:03.254 Found net devices under 0000:31:00.0: cvl_0_0
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:31:03.254 Found net devices under 0000:31:00.1: cvl_0_1
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:03.254 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:03.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:03.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms
00:31:03.514
00:31:03.514 --- 10.0.0.2 ping statistics ---
00:31:03.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:03.514 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:03.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:03.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms
00:31:03.514
00:31:03.514 --- 10.0.0.1 ping statistics ---
00:31:03.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:03.514 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:03.514 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:31:03.775 ************************************
00:31:03.775 START TEST nvmf_target_disconnect_tc1
00:31:03.775 ************************************
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:31:03.775 [2024-11-25 13:05:43.548569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:03.775 [2024-11-25 13:05:43.548630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17d8cf0 with addr=10.0.0.2, port=4420
00:31:03.775 [2024-11-25 13:05:43.548652] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:31:03.775 [2024-11-25 13:05:43.548662] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:31:03.775 [2024-11-25 13:05:43.548669] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:31:03.775 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:31:03.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:31:03.775 Initializing NVMe Controllers
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:31:03.775
00:31:03.775 real 0m0.127s
00:31:03.775 user 0m0.062s
00:31:03.775 sys 0m0.064s
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:31:03.775 ************************************
00:31:03.775 END TEST nvmf_target_disconnect_tc1
00:31:03.775 ************************************
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:31:03.775 ************************************
00:31:03.775 START TEST nvmf_target_disconnect_tc2
00:31:03.775 ************************************
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=836454
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 836454
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 836454 ']'
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:03.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:03.775 13:05:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:04.037 [2024-11-25 13:05:43.700417] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:31:04.037 [2024-11-25 13:05:43.700464] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:04.037 [2024-11-25 13:05:43.802673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:04.037 [2024-11-25 13:05:43.845509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:04.037 [2024-11-25 13:05:43.845555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:04.037 [2024-11-25 13:05:43.845564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.037 [2024-11-25 13:05:43.845571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.037 [2024-11-25 13:05:43.845577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.037 [2024-11-25 13:05:43.847447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:04.037 [2024-11-25 13:05:43.847606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:04.037 [2024-11-25 13:05:43.847762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:04.037 [2024-11-25 13:05:43.847762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.981 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.981 Malloc0 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.982 [2024-11-25 13:05:44.617135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.982 13:05:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.982 [2024-11-25 13:05:44.657554] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=836717 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:04.982 13:05:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:06.900 13:05:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 836454 00:31:06.900 13:05:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error 
(sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Read completed with error (sct=0, sc=8) 00:31:06.900 starting I/O failed 00:31:06.900 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 [2024-11-25 13:05:46.692880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:06.901 [2024-11-25 13:05:46.693384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.693429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 
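The tc1 case above already exercised the failure path on purpose: the NOT wrapper runs build/examples/reconnect against 10.0.0.2:4420 with no listener and passes only because spdk_nvme_probe() fails (es=1). tc2 then does the real work: it launches a standalone nvmf_tgt inside the cvl_0_0_ns_spdk namespace, provisions it over JSON-RPC, starts the reconnect workload, and SIGKILLs the target so that in-flight I/O fails. A minimal sketch of the equivalent provisioning, assuming an SPDK checkout's scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket (both assumptions; the test drives the same RPCs through its rpc_cmd helper):

  # 64 MB malloc ramdisk with 512-byte blocks, exposed as a namespace of cnode1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # TCP transport; -o disables the C2H success optimization
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  # -a allows any host NQN to connect, -s sets the subsystem serial number
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listeners are up, reconnect runs with -q 32, which is why the failed completions above and below arrive in bursts of 32 per queue pair: the whole outstanding queue is drained at once when its qpair dies.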
00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 [2024-11-25 13:05:46.693626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 
starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Write completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 Read completed with error (sct=0, sc=8) 00:31:06.901 starting I/O failed 00:31:06.901 [2024-11-25 13:05:46.693871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:06.901 [2024-11-25 13:05:46.694250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.694272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 00:31:06.901 [2024-11-25 13:05:46.694521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.694533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 00:31:06.901 [2024-11-25 13:05:46.694833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.694844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 
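Decoding the bursts: every completion carries (sct=0, sc=8), i.e. status code type 0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion", the status SPDK reports when it drains a queue pair that has died underneath outstanding commands. The three "CQ transport error -6 (No such device or address)" records mark the host noticing the dead TCP connections on qpair ids 4, 2 and 1 in turn. A rough decoder for the pairs seen in this log (a sketch covering only these codes, not the full spec table):

  decode_status() {
      local sct=$1 sc=$2
      case "$sct/$sc" in
          0/0) echo "GENERIC / SUCCESSFUL_COMPLETION" ;;
          0/8) echo "GENERIC / ABORTED_SQ_DELETION" ;;
          *)   echo "sct=$sct sc=$sc: see the NVMe base spec generic status table" ;;
      esac
  }
  decode_status 0 8   # prints GENERIC / ABORTED_SQ_DELETION

With the qpairs gone, the example starts probing the target address again, which is where the long run of connect() failures below comes from.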
00:31:06.901 [2024-11-25 13:05:46.695127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.695138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 00:31:06.901 [2024-11-25 13:05:46.695407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.695418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 00:31:06.901 [2024-11-25 13:05:46.695610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.695621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 00:31:06.901 [2024-11-25 13:05:46.695941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.695953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 00:31:06.901 [2024-11-25 13:05:46.696265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.696276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 00:31:06.901 [2024-11-25 13:05:46.696603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.901 [2024-11-25 13:05:46.696613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.901 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.696843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.696854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.697081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.697093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.697406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.697417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.697617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.697629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 
00:31:06.902 [2024-11-25 13:05:46.697931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.697943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.698284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.698295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.698588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.698602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.698919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.698931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.699254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.699265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.699603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.699614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.699944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.699955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.700286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.700297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.700638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.700649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.700952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.700963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 
00:31:06.902 [2024-11-25 13:05:46.701138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.701149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.701461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.701472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.701769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.701780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.702081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.702092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.702383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.702394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.702706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.702717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.703038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.703051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.703349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.703360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.703698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.703709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.704085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.704097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 
00:31:06.902 [2024-11-25 13:05:46.704256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.704267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.704455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.704466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.704786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.704797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.705146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.705159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.705478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.705489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.705818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.705829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.705996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.706009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.706329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.706340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.706715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.706726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.707023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.707035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 
00:31:06.902 [2024-11-25 13:05:46.707322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.707333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.707642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.707653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.707969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.707982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.902 [2024-11-25 13:05:46.708281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.902 [2024-11-25 13:05:46.708292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.902 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.708632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.708644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.708918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.708929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.709130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.709141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.709438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.709449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.709725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.709737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.710065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.710076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 
00:31:06.903 [2024-11-25 13:05:46.710408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.710419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.710702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.710713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.711045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.711060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.711356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.711367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.711679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.711691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.711987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.711998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.712287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.712298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.712595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.712606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.712895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.712906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.713186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.713197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 
00:31:06.903 [2024-11-25 13:05:46.713491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.713501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.713835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.713846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.714069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.714080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.714386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.714396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.714725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.714737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.715024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.715036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.715333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.715345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.715648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.715659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.715985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.715997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.716332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.716343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 
00:31:06.903 [2024-11-25 13:05:46.716637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.716649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.716919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.716931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.717247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.717260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.717555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.717567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.717891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.717904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.718268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.718280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.718551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.718561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.718841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.718852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.719152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.719165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.903 [2024-11-25 13:05:46.719363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.719374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 
00:31:06.903 [2024-11-25 13:05:46.719661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.903 [2024-11-25 13:05:46.719671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.903 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.719996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.720008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.720318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.720331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.720624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.720635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.720797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.720809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.721103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.721115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.721417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.721428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.721719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.721730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.721903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.721915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 00:31:06.904 [2024-11-25 13:05:46.722235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.904 [2024-11-25 13:05:46.722246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.904 qpair failed and we were unable to recover it. 
00:31:06.904 [2024-11-25 13:05:46.722551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.904 [2024-11-25 13:05:46.722563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:06.904 qpair failed and we were unable to recover it.
[... identical posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplets repeat for every reconnect attempt from 13:05:46.722 through 13:05:46.795, always with the same tqpair=0x7f22f4000b90, addr=10.0.0.2, port=4420, errno = 111; duplicate records omitted ...]
00:31:06.910 [2024-11-25 13:05:46.795682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.910 [2024-11-25 13:05:46.795712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.910 qpair failed and we were unable to recover it. 00:31:06.910 [2024-11-25 13:05:46.796049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.910 [2024-11-25 13:05:46.796080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.910 qpair failed and we were unable to recover it. 00:31:06.910 [2024-11-25 13:05:46.796405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.910 [2024-11-25 13:05:46.796434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.910 qpair failed and we were unable to recover it. 00:31:06.910 [2024-11-25 13:05:46.796680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.910 [2024-11-25 13:05:46.796708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.910 qpair failed and we were unable to recover it. 00:31:06.910 [2024-11-25 13:05:46.797079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.910 [2024-11-25 13:05:46.797110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.910 qpair failed and we were unable to recover it. 00:31:06.910 [2024-11-25 13:05:46.797466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.910 [2024-11-25 13:05:46.797496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.910 qpair failed and we were unable to recover it. 00:31:06.910 [2024-11-25 13:05:46.797874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.910 [2024-11-25 13:05:46.797905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:06.910 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.798254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.798285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.798675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.798704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.799061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.799092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 
00:31:07.181 [2024-11-25 13:05:46.799415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.799443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.799812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.799841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.800172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.800203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.800546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.800576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.800939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.800970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.801330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.801359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.801716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.801746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.802086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.802117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.802472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.802502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.802846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.802887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 
00:31:07.181 [2024-11-25 13:05:46.803271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.803302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.803658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.803688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.804015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.804047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.804423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.804453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.804807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.804837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.805206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.805237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.805484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.805512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.805882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.805914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.806147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.806176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.806533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.806562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 
00:31:07.181 [2024-11-25 13:05:46.806882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.806913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.181 [2024-11-25 13:05:46.807259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.181 [2024-11-25 13:05:46.807288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.181 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.807643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.807679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.808004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.808035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.808346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.808376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.808727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.808757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.809113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.809143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.809507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.809536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.809898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.809930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.810269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.810299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 
00:31:07.182 [2024-11-25 13:05:46.810651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.810681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.810996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.811028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.811388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.811418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.811756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.811785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.812129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.812161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.812481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.812511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.812875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.812907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.813241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.813271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.813629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.813658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.813986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.814017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 
00:31:07.182 [2024-11-25 13:05:46.814382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.814412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.814768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.814797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.815119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.815150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.815501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.815530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.815891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.815922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.816260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.816290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.816558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.816590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.816821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.816852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.817226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.817257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.817621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.817652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 
00:31:07.182 [2024-11-25 13:05:46.818012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.818043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.818369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.818398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.818721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.818751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.819104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.819135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.819485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.819514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.819759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.819791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.820166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.820197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.820535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.820565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.820921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.820952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.182 qpair failed and we were unable to recover it. 00:31:07.182 [2024-11-25 13:05:46.821317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.182 [2024-11-25 13:05:46.821346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 
00:31:07.183 [2024-11-25 13:05:46.821693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.821723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.822086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.822117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.822484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.822521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.822856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.822898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.823272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.823301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.823655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.823685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.824022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.824053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.824426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.824456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.824805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.824834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.825174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.825206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 
00:31:07.183 [2024-11-25 13:05:46.825561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.825591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.825951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.825981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.826232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.826260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.826559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.826589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.826936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.826967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.827312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.827342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.827577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.827607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.827972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.828004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.828340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.828369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.828730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.828760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 
00:31:07.183 [2024-11-25 13:05:46.829168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.829199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.829530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.829559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.829912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.829943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.830305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.830334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.830718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.830748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.831081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.831112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.831463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.831493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.831830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.831859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.832209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.832239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.832600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.832631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 
00:31:07.183 [2024-11-25 13:05:46.832963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.832995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.833360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.833389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.833782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.833811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.834180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.834212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.834561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.834592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.834942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.834972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.835297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.183 [2024-11-25 13:05:46.835327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.183 qpair failed and we were unable to recover it. 00:31:07.183 [2024-11-25 13:05:46.835689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.835718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.836057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.836087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.836334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.836364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 
00:31:07.184 [2024-11-25 13:05:46.836725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.836755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.837091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.837123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.837455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.837491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.837850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.837889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.838235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.838266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.838599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.838629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.838953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.838985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.839347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.839377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.839609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.839641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.839878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.839910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 
00:31:07.184 [2024-11-25 13:05:46.840285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.840315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.840675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.840706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.840950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.840982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.841222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.841252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.841617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.841647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.841978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.842010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.842374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.842404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.842750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.842780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.843110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.843141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 00:31:07.184 [2024-11-25 13:05:46.843489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.184 [2024-11-25 13:05:46.843519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.184 qpair failed and we were unable to recover it. 
00:31:07.184 [2024-11-25 13:05:46.843856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.184 [2024-11-25 13:05:46.843894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.184 qpair failed and we were unable to recover it.
[... the same failure triplet — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats ~200 more times between 13:05:46.844219 and 13:05:46.922517 ...]
00:31:07.190 [2024-11-25 13:05:46.922883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.190 [2024-11-25 13:05:46.922920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.190 qpair failed and we were unable to recover it.
00:31:07.190 [2024-11-25 13:05:46.923249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.923279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.923668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.923697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.923932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.923965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.924329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.924360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.924681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.924711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.925069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.925101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.925453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.925482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.925842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.925883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.926229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.926258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.926487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.926519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 
00:31:07.190 [2024-11-25 13:05:46.926886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.926918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.927267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.927297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.927658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.927687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.928106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.928138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.928485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.928515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.928858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.928910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.929238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.929268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.929506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.929536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.929881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.929911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.190 [2024-11-25 13:05:46.930265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.930295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 
00:31:07.190 [2024-11-25 13:05:46.930646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.190 [2024-11-25 13:05:46.930676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.190 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.931017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.931048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.931424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.931454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.931769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.931799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.932124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.932155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.932486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.932515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.932878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.932910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.933156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.933186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.933559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.933589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.933827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.933855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 
00:31:07.191 [2024-11-25 13:05:46.934197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.934228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.934484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.934512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.934882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.934914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.935261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.935292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.935666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.935694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.935912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.935944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.936289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.936319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.936677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.936706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.937062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.937094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.937332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.937367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 
00:31:07.191 [2024-11-25 13:05:46.937755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.937785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.938149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.938180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.938516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.938545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.938912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.938944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.939329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.939358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.939709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.939739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.940046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.940077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.940432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.940462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.940808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.940838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.941218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.941249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 
00:31:07.191 [2024-11-25 13:05:46.941490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.941523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.941729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.941761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.942088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.942120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.942480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.942510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.942853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.942894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.943207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.943237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.943595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.943625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.943971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.944002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.191 [2024-11-25 13:05:46.944346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.191 [2024-11-25 13:05:46.944377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.191 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.944753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.944782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 
00:31:07.192 [2024-11-25 13:05:46.945164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.945196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.945550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.945581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.945944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.945976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.946329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.946359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.946712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.946742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.947114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.947145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.947483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.947514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.947873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.947904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.948145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.948178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.948590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.948619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 
00:31:07.192 [2024-11-25 13:05:46.948970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.949003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.949274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.949305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.949696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.949728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.950077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.950108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.950478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.950510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.950851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.950890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.951215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.951244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.951607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.951637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.951974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.952006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.952355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.952390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 
00:31:07.192 [2024-11-25 13:05:46.952741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.952770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.953122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.953154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.953516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.953546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.953892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.953923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.954110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.954143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.954504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.954534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.954880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.954911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.955249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.955279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.955637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.955669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.956007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.956037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 
00:31:07.192 [2024-11-25 13:05:46.956405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.956434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.956795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.956825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.957011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.957046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.957387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.957418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.957780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.957810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.957999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.192 [2024-11-25 13:05:46.958034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.192 qpair failed and we were unable to recover it. 00:31:07.192 [2024-11-25 13:05:46.958379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.958410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.958767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.958797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.959152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.959184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.959530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.959560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 
00:31:07.193 [2024-11-25 13:05:46.959918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.959949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.960315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.960346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.960689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.960720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.960946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.960978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.961337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.961367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.961709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.961739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.962112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.962143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.962497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.962527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.962881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.962913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.963317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.963348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 
00:31:07.193 [2024-11-25 13:05:46.963702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.963732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.964093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.964124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.964481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.964510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.964844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.964889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.965084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.965114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.965439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.965469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.965786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.965814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.966155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.966186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.966530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.966560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.966916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.966953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 
00:31:07.193 [2024-11-25 13:05:46.967343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.967372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.967719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.967748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.968076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.968108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.968449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.968479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.968856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.968904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.969241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.969271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.969576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.969605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.969956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.969988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.970353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.970383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.970731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.970761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 
00:31:07.193 [2024-11-25 13:05:46.971120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.971150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.971504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.971534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.971890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.971922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.972300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.193 [2024-11-25 13:05:46.972329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.193 qpair failed and we were unable to recover it. 00:31:07.193 [2024-11-25 13:05:46.972637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.194 [2024-11-25 13:05:46.972665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.194 qpair failed and we were unable to recover it. 00:31:07.194 [2024-11-25 13:05:46.973006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.194 [2024-11-25 13:05:46.973037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.194 qpair failed and we were unable to recover it. 00:31:07.194 [2024-11-25 13:05:46.973280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.194 [2024-11-25 13:05:46.973311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.194 qpair failed and we were unable to recover it. 00:31:07.194 [2024-11-25 13:05:46.973550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.194 [2024-11-25 13:05:46.973580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.194 qpair failed and we were unable to recover it. 00:31:07.194 [2024-11-25 13:05:46.973938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.194 [2024-11-25 13:05:46.973969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.194 qpair failed and we were unable to recover it. 00:31:07.194 [2024-11-25 13:05:46.974196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.194 [2024-11-25 13:05:46.974228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.194 qpair failed and we were unable to recover it. 
00:31:07.199 [2024-11-25 13:05:47.049353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.049383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.049749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.049780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.050133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.050165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.050508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.050538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.050907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.050938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.051318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.051347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.051699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.051730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.052091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.052124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.052479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.052509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.052873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.052904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 
00:31:07.199 [2024-11-25 13:05:47.053244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.053275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.053615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.199 [2024-11-25 13:05:47.053645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.199 qpair failed and we were unable to recover it. 00:31:07.199 [2024-11-25 13:05:47.053999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.054030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.054396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.054425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.054759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.054790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.055128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.055159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.055523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.055553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.055694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.055726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.056106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.056138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.056466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.056504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 
00:31:07.200 [2024-11-25 13:05:47.056860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.056906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.057294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.057324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.057664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.057695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.058052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.058084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.058320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.058350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.058711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.058740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.059100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.059132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.059477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.059508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.059742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.059775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.060125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.060156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 
00:31:07.200 [2024-11-25 13:05:47.060500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.060530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.060886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.060917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.061296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.061326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.061658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.061689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.062043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.062076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.062432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.062463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.062828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.062857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.063210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.063240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.063584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.063616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.063977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.064007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 
00:31:07.200 [2024-11-25 13:05:47.064384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.064414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.064781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.064811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.065152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.065184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.065530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.065561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.065915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.065946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.066319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.066349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.066586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.066624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.066994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.067025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.067364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.067394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 00:31:07.200 [2024-11-25 13:05:47.067750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.067781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.200 qpair failed and we were unable to recover it. 
00:31:07.200 [2024-11-25 13:05:47.068135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.200 [2024-11-25 13:05:47.068167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.068522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.068552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.068925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.068956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.069317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.069347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.069693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.069723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.070131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.070163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.070513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.070544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.070898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.070929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.071287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.071317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.071663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.071693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 
00:31:07.201 [2024-11-25 13:05:47.071930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.071962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.072340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.072370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.072714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.072745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.072982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.073016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.201 [2024-11-25 13:05:47.073261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.201 [2024-11-25 13:05:47.073294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.201 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.073669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.073703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.074009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.074040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.074285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.074315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.074678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.074708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.075069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.075100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 
00:31:07.472 [2024-11-25 13:05:47.075445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.075474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.075822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.075852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.076135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.076165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.076528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.076559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.076899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.076931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.077270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.077300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.077680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.077711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.078070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.078101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.078467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.078498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.078834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.078871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 
00:31:07.472 [2024-11-25 13:05:47.079216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.079247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.079627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.079658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.080013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.080045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.080396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.080426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.080782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.080814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.472 [2024-11-25 13:05:47.081192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.472 [2024-11-25 13:05:47.081223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.472 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.081567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.081603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.081959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.081991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.082321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.082351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.082717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.082748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 
00:31:07.473 [2024-11-25 13:05:47.082931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.082964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.083203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.083233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.083588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.083618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.083975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.084007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.084360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.084390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.084746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.084776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.085113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.085144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.085514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.085546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.085897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.085928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.086264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.086295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 
00:31:07.473 [2024-11-25 13:05:47.086669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.086700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.087069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.087100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.087444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.087475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.087821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.087852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.088211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.088242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.088590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.088621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.088974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.089006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.089355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.089384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.089748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.089778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.090107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.090139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 
00:31:07.473 [2024-11-25 13:05:47.090500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.090531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.090858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.090897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.091275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.091305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.091664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.091695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.092038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.092070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.092427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.092458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.092840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.092886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.093283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.093313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.093654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.093684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.094033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.094067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 
00:31:07.473 [2024-11-25 13:05:47.094430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.094460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.094820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.094850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.095210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.095245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.473 [2024-11-25 13:05:47.095581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.473 [2024-11-25 13:05:47.095614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.473 qpair failed and we were unable to recover it. 00:31:07.474 [2024-11-25 13:05:47.095963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.474 [2024-11-25 13:05:47.095995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.474 qpair failed and we were unable to recover it. 00:31:07.474 [2024-11-25 13:05:47.096370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.474 [2024-11-25 13:05:47.096400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.474 qpair failed and we were unable to recover it. 00:31:07.474 [2024-11-25 13:05:47.096615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.474 [2024-11-25 13:05:47.096653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.474 qpair failed and we were unable to recover it. 00:31:07.474 [2024-11-25 13:05:47.097007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.474 [2024-11-25 13:05:47.097040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.474 qpair failed and we were unable to recover it. 00:31:07.474 [2024-11-25 13:05:47.097416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.474 [2024-11-25 13:05:47.097445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.474 qpair failed and we were unable to recover it. 00:31:07.474 [2024-11-25 13:05:47.097811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.474 [2024-11-25 13:05:47.097842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.474 qpair failed and we were unable to recover it. 
00:31:07.474 [2024-11-25 13:05:47.098228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.474 [2024-11-25 13:05:47.098260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.474 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failed sequence repeats once per reconnect attempt, identical apart from timestamps, from 13:05:47.098 through 13:05:47.184; intermediate repetitions omitted ...]
00:31:07.479 [2024-11-25 13:05:47.184732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.479 [2024-11-25 13:05:47.184745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.479 qpair failed and we were unable to recover it.
00:31:07.479 [2024-11-25 13:05:47.185103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.479 [2024-11-25 13:05:47.185117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.479 qpair failed and we were unable to recover it. 00:31:07.479 [2024-11-25 13:05:47.185484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.479 [2024-11-25 13:05:47.185496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.479 qpair failed and we were unable to recover it. 00:31:07.479 [2024-11-25 13:05:47.185819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.479 [2024-11-25 13:05:47.185831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.479 qpair failed and we were unable to recover it. 00:31:07.479 [2024-11-25 13:05:47.186050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.479 [2024-11-25 13:05:47.186064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.479 qpair failed and we were unable to recover it. 00:31:07.479 [2024-11-25 13:05:47.186367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.479 [2024-11-25 13:05:47.186380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.479 qpair failed and we were unable to recover it. 00:31:07.479 [2024-11-25 13:05:47.186726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.186739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.187093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.187107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.187426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.187439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.187636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.187651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.187980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.187996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 
00:31:07.480 [2024-11-25 13:05:47.188300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.188315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.188715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.188731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.189055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.189070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.189483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.189499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.189838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.189855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.190198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.190213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.190554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.190570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.190883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.190899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.191227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.191243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.191574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.191589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 
00:31:07.480 [2024-11-25 13:05:47.191939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.191953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.192183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.192198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.192535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.192552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.192893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.192908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.193249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.193265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.193613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.193628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.193840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.193866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.194195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.194211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.194570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.194587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.194943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.194961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 
00:31:07.480 [2024-11-25 13:05:47.195162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.195178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.195514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.195529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.195872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.195888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.196206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.196221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.196555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.196570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.196751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.196768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.197084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.197100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.197401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.197417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.197753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.197768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.198095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.198111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 
00:31:07.480 [2024-11-25 13:05:47.198451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.198471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.198685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.198704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.199125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.199145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.480 qpair failed and we were unable to recover it. 00:31:07.480 [2024-11-25 13:05:47.199367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.480 [2024-11-25 13:05:47.199385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.199740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.199759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.200088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.200107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.200443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.200463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.200668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.200688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.201071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.201092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.201420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.201438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 
00:31:07.481 [2024-11-25 13:05:47.201780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.201800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.202107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.202129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.202482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.202501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.202818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.202839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.203161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.203181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.203457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.203476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.203786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.203805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.204120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.204141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.204468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.204487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.204821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.204839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 
00:31:07.481 [2024-11-25 13:05:47.205173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.205195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.205522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.205542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.205890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.205913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.206274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.206296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.206638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.206660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.206980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.207002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.207340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.207366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.207694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.207715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.208047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.208070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.208312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.208333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 
00:31:07.481 [2024-11-25 13:05:47.208663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.208683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.209028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.209048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.209411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.209431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.209774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.209792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.210096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.210117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.481 qpair failed and we were unable to recover it. 00:31:07.481 [2024-11-25 13:05:47.210449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.481 [2024-11-25 13:05:47.210476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.210838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.210867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.211095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.211119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.211467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.211490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.211852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.211886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 
00:31:07.482 [2024-11-25 13:05:47.212264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.212289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.212723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.212746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.213108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.213133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.213492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.213516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.213886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.213912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.214251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.214274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.214636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.214660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.215036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.215059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.215417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.215441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.215770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.215795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 
00:31:07.482 [2024-11-25 13:05:47.216186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.216209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.216571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.216594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.216970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.216995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.217344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.217368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.217733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.217756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.218101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.218125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.218491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.218514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.218876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.218903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.219290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.219314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.219642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.219665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 
00:31:07.482 [2024-11-25 13:05:47.220031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.220055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.220415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.220439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.220800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.220826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.221167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.221194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.221559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.221584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.221820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.221848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.222115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.222144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.222467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.222491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.222817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.222843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.223191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.223223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 
00:31:07.482 [2024-11-25 13:05:47.223570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.223601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.223957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.223991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.224365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.224398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.482 qpair failed and we were unable to recover it. 00:31:07.482 [2024-11-25 13:05:47.224750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.482 [2024-11-25 13:05:47.224783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.225194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.225227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.225579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.225610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.225833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.225894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.226241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.226272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.226663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.226697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.227056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.227089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 
00:31:07.483 [2024-11-25 13:05:47.227454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.227488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.227878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.227911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.228286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.228317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.228563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.228597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.228924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.228958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.229303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.229337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.229693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.229727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.230080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.230114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.230481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.230512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.230856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.230896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 
00:31:07.483 [2024-11-25 13:05:47.231165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.231198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.231565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.231598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.231934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.231966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.232328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.232363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.232727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.232761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.233092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.233123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.233439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.233470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.233818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.233850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.234207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.234239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 00:31:07.483 [2024-11-25 13:05:47.234609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.483 [2024-11-25 13:05:47.234640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.483 qpair failed and we were unable to recover it. 
00:31:07.489 [2024-11-25 13:05:47.312707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.312738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.312969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.313003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.313391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.313423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.313784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.313816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.314189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.314222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.314572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.314603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.314958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.314992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.315370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.315401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.315772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.315802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.316134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.316169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 
00:31:07.489 [2024-11-25 13:05:47.316479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.316509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.316896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.316927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.317167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.317200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.317630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.317662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.318018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.318051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.318439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.318470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.318833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.318875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.319150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.319182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.319530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.319561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.319945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.319978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 
00:31:07.489 [2024-11-25 13:05:47.320366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.320398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.320790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.320821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.321214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.321246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.321611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.321643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.322018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.322051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.322417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.322448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.322811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.322843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.323217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.323250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.323478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.323510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.323886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.323920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 
00:31:07.489 [2024-11-25 13:05:47.324270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.324300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.324677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.324709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.325086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.325118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.325559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.325596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.325946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.325980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.326353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.489 [2024-11-25 13:05:47.326384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.489 qpair failed and we were unable to recover it. 00:31:07.489 [2024-11-25 13:05:47.326754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.326785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.327165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.327199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.327545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.327575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.327944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.327977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 
00:31:07.490 [2024-11-25 13:05:47.328326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.328356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.328717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.328750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.329174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.329205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.329543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.329574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.329814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.329850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.330236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.330268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.330623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.330655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.331023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.331056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.331443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.331474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.331836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.331878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 
00:31:07.490 [2024-11-25 13:05:47.332217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.332249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.332620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.332651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.333034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.333068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.333432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.333463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.333848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.333888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.334255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.334285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.334654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.334686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.335038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.335069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.335423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.335455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.335823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.335854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 
00:31:07.490 [2024-11-25 13:05:47.336212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.336244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.336639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.336671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.337018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.337050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.337427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.337458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.337814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.337845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.338222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.338256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.338608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.338639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.339015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.339050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.339428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.339461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 00:31:07.490 [2024-11-25 13:05:47.339817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.490 [2024-11-25 13:05:47.339848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.490 qpair failed and we were unable to recover it. 
00:31:07.490 [2024-11-25 13:05:47.340196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.340228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.340595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.340627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.340997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.341031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.341360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.341399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.341751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.341781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.342165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.342197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.342575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.342606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.342961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.342992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.343321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.343352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.343730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.343762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 
00:31:07.491 [2024-11-25 13:05:47.344100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.344130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.344487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.344518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.344884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.344918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.345316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.345347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.345599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.345628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.346001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.346034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.346271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.346303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.346684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.346717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.347066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.347100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.347456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.347487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 
00:31:07.491 [2024-11-25 13:05:47.347836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.347876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.348266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.348299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.348678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.348709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.348965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.349000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.349380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.349411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.349649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.349680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.350047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.350082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.350436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.350468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.350838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.350879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.351230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.351263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 
00:31:07.491 [2024-11-25 13:05:47.351632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.351665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.352064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.352096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.352437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.352469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.352892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.352925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.353328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.353360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.353722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.353753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.354002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.354034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.491 [2024-11-25 13:05:47.354405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.491 [2024-11-25 13:05:47.354436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.491 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.354812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.354842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.355202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.355234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 
00:31:07.492 [2024-11-25 13:05:47.355585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.355617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.355990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.356022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.356404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.356435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.356793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.356831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.357219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.357252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.357606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.357637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.357977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.358009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.358372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.358404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.358772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.358803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.359179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.359213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 
00:31:07.492 [2024-11-25 13:05:47.359584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.359615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.359986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.360020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.360439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.360470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.360838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.360890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.361253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.361286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.361627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.361658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.362038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.362070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.362465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.362495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.362878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.362912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 00:31:07.492 [2024-11-25 13:05:47.363284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.492 [2024-11-25 13:05:47.363315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.492 qpair failed and we were unable to recover it. 
00:31:07.492 [2024-11-25 13:05:47.363552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.492 [2024-11-25 13:05:47.363584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.492 qpair failed and we were unable to recover it.
[... the same three-message failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back through the retry window, differing only in timestamps ...]
00:31:07.772 [2024-11-25 13:05:47.445095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.772 [2024-11-25 13:05:47.445127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.772 qpair failed and we were unable to recover it.
00:31:07.772 [2024-11-25 13:05:47.445458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.445492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.445844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.445895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.446142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.446173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.446574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.446607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.446980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.447014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.447369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.447399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.447760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.447791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.448161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.448194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.448564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.448596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.448943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.448976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 
00:31:07.772 [2024-11-25 13:05:47.449348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.449381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.449747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.449779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.450030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.450066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.450452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.450491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.450882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.450917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.451299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.451330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.451639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.451670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.452045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.452079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.452463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.452496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.452873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.452906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 
00:31:07.772 [2024-11-25 13:05:47.453287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.453320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.453672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.453704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.454041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.454073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.454440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.454472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.454841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.454880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.455232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.455263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.455610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.455642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.455985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.456017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.456387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.456422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.456817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.456849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 
00:31:07.772 [2024-11-25 13:05:47.457075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.457110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.457481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.457514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.457759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.457793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.458145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.458179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.458545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.458578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.458958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.458993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.459369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.459400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.459760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.459792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.460146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.460179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 00:31:07.772 [2024-11-25 13:05:47.460511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.772 [2024-11-25 13:05:47.460540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.772 qpair failed and we were unable to recover it. 
00:31:07.772 [2024-11-25 13:05:47.460913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.460947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.461322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.461355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.461787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.461819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.462225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.462260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.462623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.462656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.463033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.463066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.463492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.463525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.463834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.463872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.464012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.464045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.464457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.464492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 
00:31:07.773 [2024-11-25 13:05:47.464852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.464894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.465364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.465397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.465766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.465797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.466171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.466205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.466587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.466620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.466896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.466930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.467304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.467337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.467694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.467727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.468099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.468131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.468495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.468529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 
00:31:07.773 [2024-11-25 13:05:47.468763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.468800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.469171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.469204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.469561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.469594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.469958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.469992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.470229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.470261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.470624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.470659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.471003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.471038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.471420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.471456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.471830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.471869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.472260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.472292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 
00:31:07.773 [2024-11-25 13:05:47.472651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.472685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.473021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.473053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.473432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.473462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.473807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.473840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.474254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.474287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.474520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.474551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.474943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.474977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.475322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.475354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.475715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.475747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.476107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.476141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 
00:31:07.773 [2024-11-25 13:05:47.476509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.476550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.476794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.476826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.477207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.477241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.477610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.477642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.478013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.478048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.478424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.478458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.478826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.478859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.479248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.479282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.479587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.479618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.479975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.480007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 
00:31:07.773 [2024-11-25 13:05:47.480368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.480402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.773 [2024-11-25 13:05:47.480765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.773 [2024-11-25 13:05:47.480796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.773 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.481181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.481215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.481565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.481597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.482142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.482175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.482541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.482572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.482992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.483025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.483248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.483282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.483542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.483575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.483955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.483989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 
00:31:07.774 [2024-11-25 13:05:47.484361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.484393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.484748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.484781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.485081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.485115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.485492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.485524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.485899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.485933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.486295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.486326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.486698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.486731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.487098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.487134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.487501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.487533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.487886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.487920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 
00:31:07.774 [2024-11-25 13:05:47.488282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.488316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.488691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.488724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.489089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.489122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.489476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.489509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.489905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.489938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.490311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.490342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.490713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.490744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.491122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.491157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.491382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.491417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.491800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.491832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 
00:31:07.774 [2024-11-25 13:05:47.492118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.492158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.492496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.492530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.492887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.492920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.493295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.493326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.493698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.493729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.494094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.494130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.494484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.494516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.494883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.494915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.495288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.495319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 00:31:07.774 [2024-11-25 13:05:47.495644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.774 [2024-11-25 13:05:47.495678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.774 qpair failed and we were unable to recover it. 
00:31:07.774 [2024-11-25 13:05:47.496033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.774 [2024-11-25 13:05:47.496064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.774 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, then nvme_tcp_qpair_connect_sock error for tqpair=0x7f22f4000b90 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 13:05:47.496 and 13:05:47.577; only the timestamps differ ...]
00:31:07.778 [2024-11-25 13:05:47.577370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.778 [2024-11-25 13:05:47.577401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.778 qpair failed and we were unable to recover it.
00:31:07.779 [2024-11-25 13:05:47.577768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.577799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.578122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.578153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.578522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.578553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.578936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.578968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.579344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.579376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.579730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.579761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.580137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.580169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.580539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.580572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.580930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.580964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.581345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.581376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 
00:31:07.779 [2024-11-25 13:05:47.581715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.581747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.582094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.582128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.582380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.582409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.582715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.582746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.583126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.583159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.583540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.583572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.583935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.583968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.584332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.584366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.584617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.584648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.585025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.585058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 
00:31:07.779 [2024-11-25 13:05:47.585438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.585477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.585713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.585747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.586113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.586146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.586542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.586573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.586940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.586974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.587338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.587370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.587727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.587758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.588151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.588186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.588551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.588583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.588944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.588978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 
00:31:07.779 [2024-11-25 13:05:47.589349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.589380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.589751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.589782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.590120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.590153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.590505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.590537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.590928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.590961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.591359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.591391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.591771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.591802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.592155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.592187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.592539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.592572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.592939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.592972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 
00:31:07.779 [2024-11-25 13:05:47.593341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.593373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.593732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.593767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.594122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.594155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.594481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.594513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.594887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.594920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.595371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.595403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.595758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.595789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.596037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.596072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.596437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.596468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.596819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.596851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 
00:31:07.779 [2024-11-25 13:05:47.597247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.597280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.779 [2024-11-25 13:05:47.597651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.779 [2024-11-25 13:05:47.597683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.779 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.598009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.598042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.598394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.598426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.598786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.598818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.599192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.599224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.599576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.599607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.599984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.600018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.600388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.600421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.600751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.600784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 
00:31:07.780 [2024-11-25 13:05:47.601155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.601194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.601541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.601574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.601933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.601984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.602388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.602419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.602824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.602856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.603216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.603248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.603604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.603635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.604005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.604038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.604415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.604449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.604780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.604813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 
00:31:07.780 [2024-11-25 13:05:47.605193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.605226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.605573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.605605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.605995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.606029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.606388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.606419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.606784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.606817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.607192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.607224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.607597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.607629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.607986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.608018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.608381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.608413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.608659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.608690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 
00:31:07.780 [2024-11-25 13:05:47.609076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.609109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.609459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.609491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.609886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.609921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.610294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.610327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.610582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.610611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.610951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.610983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.611356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.611388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.611766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.611799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.612142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.612175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.612535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.612567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 
00:31:07.780 [2024-11-25 13:05:47.612922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.612953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.613332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.613363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.613710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.613742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.614118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.614151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.614519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.614551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.614912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.614945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.615308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.615340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.615706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.615737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.616107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.616141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.616519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.616550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 
00:31:07.780 [2024-11-25 13:05:47.616789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.616825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.617191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.617223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.780 [2024-11-25 13:05:47.617582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.780 [2024-11-25 13:05:47.617612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.780 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.617860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.617904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.618324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.618355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.618597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.618627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.618987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.619020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.619425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.619456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.619877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.619910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.620266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.620298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 
00:31:07.781 [2024-11-25 13:05:47.620652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.620684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.620940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.620973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.621344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.621377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.621728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.621760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.622122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.622155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.622559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.622591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.622922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.622955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.623339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.623371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.623723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.623754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.624098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.624130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 
00:31:07.781 [2024-11-25 13:05:47.624369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.624403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.624757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.624789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.625149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.625183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.625509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.625540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.625961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.625993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.626347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.626379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.626743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.626776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.627076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.627110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.627513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.627545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 00:31:07.781 [2024-11-25 13:05:47.627781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:07.781 [2024-11-25 13:05:47.627811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:07.781 qpair failed and we were unable to recover it. 
00:31:07.781 [2024-11-25 13:05:47.628195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.781 [2024-11-25 13:05:47.628228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:07.781 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats roughly 200 more times between 13:05:47.628 and 13:05:47.713 (log clock 00:31:07.781 through 00:31:08.057), differing only in timestamps; every attempt targets tqpair=0x7f22f4000b90 at addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."]
00:31:08.057 [2024-11-25 13:05:47.713734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.057 [2024-11-25 13:05:47.713765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.057 qpair failed and we were unable to recover it.
00:31:08.057 [2024-11-25 13:05:47.714141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.057 [2024-11-25 13:05:47.714174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.057 qpair failed and we were unable to recover it. 00:31:08.057 [2024-11-25 13:05:47.714541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.057 [2024-11-25 13:05:47.714573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.057 qpair failed and we were unable to recover it. 00:31:08.057 [2024-11-25 13:05:47.714931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.057 [2024-11-25 13:05:47.714964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.057 qpair failed and we were unable to recover it. 00:31:08.057 [2024-11-25 13:05:47.715314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.715345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.715717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.715749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.717465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.717524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.717922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.717958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.718319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.718353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.718730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.718765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.719147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.719181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 
00:31:08.058 [2024-11-25 13:05:47.719531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.719568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.719945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.719979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.720365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.720396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.720759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.720790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.722472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.722538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.722986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.723027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.723417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.723449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.723804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.723834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.724155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.724195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.724528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.724561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 
00:31:08.058 [2024-11-25 13:05:47.724945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.724980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.725348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.725380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.725782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.725816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.726186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.726219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.726590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.726625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.726981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.727016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.727384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.727418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.727775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.727819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.728275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.728314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.728672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.728705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 
00:31:08.058 [2024-11-25 13:05:47.729057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.729091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.730925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.730985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.731401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.731434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.731777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.731809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.732210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.732243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.732581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.732616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.732962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.732997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.733245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.733281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.733564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.733604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.058 [2024-11-25 13:05:47.733980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.734014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 
00:31:08.058 [2024-11-25 13:05:47.735785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.058 [2024-11-25 13:05:47.735849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.058 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.736275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.736313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.736691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.736723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.737089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.737124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.737502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.737534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.737888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.737921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.738283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.738315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.738700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.738733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.739096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.739128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.739492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.739523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 
00:31:08.059 [2024-11-25 13:05:47.739928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.739964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.740298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.740330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.740670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.740701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.741047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.741084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.741473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.741508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.741878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.741912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.742270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.742302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.744095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.744156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.744547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.744584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.744938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.744976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 
00:31:08.059 [2024-11-25 13:05:47.745227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.745260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.745637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.745671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.746018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.746052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.746317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.746347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.746715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.746749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.747073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.747105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.748742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.748806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.749301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.749350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.749735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.749767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.751505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.751564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 
00:31:08.059 [2024-11-25 13:05:47.751972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.752010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.752404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.059 [2024-11-25 13:05:47.752437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.059 qpair failed and we were unable to recover it. 00:31:08.059 [2024-11-25 13:05:47.752806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.752838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.753239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.753270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.753629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.753661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.753979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.754012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.754389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.754421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.754786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.754818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.755201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.755234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.755598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.755630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.060 [2024-11-25 13:05:47.756040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.756074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.756465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.756500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.756855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.756899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.757242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.757273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.757640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.757673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.758022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.758055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.758413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.758444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.758815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.758845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.759110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.759146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.759506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.759540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.060 [2024-11-25 13:05:47.759938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.759976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.760334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.760368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.760733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.760765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.761181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.761215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.761632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.761665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.761913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.761950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.762327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.762360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.762713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.762747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.763119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.763154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.763523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.763554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.060 [2024-11-25 13:05:47.763887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.763924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.764280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.764312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.764547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.764577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.764985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.765019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.765275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.765307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.765668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.765700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.765984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.766017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.766380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.766419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.766787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.766817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 00:31:08.060 [2024-11-25 13:05:47.767179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.060 [2024-11-25 13:05:47.767211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.060 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-25 13:05:47.767568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.767600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.767964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.767996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.768372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.768404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.768799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.768832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.769218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.769252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.769618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.769651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.769977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.770010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.770373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.770404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.770789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.770822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.771052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.771085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-25 13:05:47.771464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.771496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.771842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.771882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.772235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.772267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.772635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.772667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.773045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.773080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.773445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.773477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.773841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.773891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.774272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.774304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.774719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.774752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 00:31:08.061 [2024-11-25 13:05:47.775183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.061 [2024-11-25 13:05:47.775216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.061 qpair failed and we were unable to recover it. 
00:31:08.061 [2024-11-25 13:05:47.775451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.061 [2024-11-25 13:05:47.775481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.061 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 from posix_sock_create, sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats continuously from 13:05:47.775 to 13:05:47.861; duplicate entries omitted ...]
00:31:08.067 [2024-11-25 13:05:47.861065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.067 [2024-11-25 13:05:47.861097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.067 qpair failed and we were unable to recover it.
00:31:08.067 [2024-11-25 13:05:47.861387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.861416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.861811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.861843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.862236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.862268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.862528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.862558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.862950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.862982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.863345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.863379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.863781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.863813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.864170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.864215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.864454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.864484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.864840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.864890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 
00:31:08.067 [2024-11-25 13:05:47.865215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.865246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.865621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.865652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.866000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.866035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.866401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.866432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.866799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.866830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.867144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.867175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.867539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.867570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.867944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.867977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.868349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.868380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.868757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.868789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 
00:31:08.067 [2024-11-25 13:05:47.869150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.869184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.869543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.869574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.869939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.869972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.870352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.870386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.870756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.870788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.871207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.871240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.871610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.871641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.872011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.872044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.872427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.872459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 00:31:08.067 [2024-11-25 13:05:47.872847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.067 [2024-11-25 13:05:47.872895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.067 qpair failed and we were unable to recover it. 
00:31:08.068 [2024-11-25 13:05:47.873190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.873222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.873582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.873614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.873778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.873807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.874188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.874221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.874581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.874613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.874996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.875029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.875411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.875443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.875813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.875844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.876250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.876282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.876648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.876678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 
00:31:08.068 [2024-11-25 13:05:47.876939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.876970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.877336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.877368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.877735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.877766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.878185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.878217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.878592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.878623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.878997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.879030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.879405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.879436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.879815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.879847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.880194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.880225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.880572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.880604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 
00:31:08.068 [2024-11-25 13:05:47.880946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.880978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.881342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.881373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.881634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.881664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.882028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.882061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.882422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.882454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.882740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.882774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.883170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.883201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.883553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.883585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.883833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.883872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.884331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.884362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 
00:31:08.068 [2024-11-25 13:05:47.884734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.884765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.885132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.885165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.068 [2024-11-25 13:05:47.885544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.068 [2024-11-25 13:05:47.885576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.068 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.885904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.885938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.886322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.886352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.886701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.886733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.887079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.887112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.887451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.887485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.887845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.887886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.888285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.888318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 
00:31:08.069 [2024-11-25 13:05:47.888665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.888697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.889084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.889118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.889403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.889434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.889769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.889802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.890168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.890209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.890564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.890596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.890755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.890792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.891235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.891269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.891604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.891636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.892038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.892070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 
00:31:08.069 [2024-11-25 13:05:47.892309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.892340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.892700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.892732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.892968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.893001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.893323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.893355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.893721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.893754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.894104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.894137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.894490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.894523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.894892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.894924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.895332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.895364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.895726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.895758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 
00:31:08.069 [2024-11-25 13:05:47.896112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.896145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.896411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.896440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.896764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.896797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.897173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.897206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.897566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.897598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.897914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.897947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.898307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.898338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.898629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.898661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.898951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.898984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.899354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.899385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 
00:31:08.069 [2024-11-25 13:05:47.899768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.069 [2024-11-25 13:05:47.899801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.069 qpair failed and we were unable to recover it. 00:31:08.069 [2024-11-25 13:05:47.899990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.900023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.900421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.900452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.900789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.900823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.901190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.901222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.901593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.901625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.901898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.901930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.902353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.902383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.902753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.902785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.903102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.903135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 
00:31:08.070 [2024-11-25 13:05:47.903367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.903398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.903776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.903807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.904212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.904245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.904672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.904704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.904961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.905002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.905407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.905439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.905816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.905848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.906239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.906271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.906673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.906704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.907088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.907120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 
00:31:08.070 [2024-11-25 13:05:47.907480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.907513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.907788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.907819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.908272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.908304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.908576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.908606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.909000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.909033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.909386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.909418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.909786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.909818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.910241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.910274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.910644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.910675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 00:31:08.070 [2024-11-25 13:05:47.910929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.070 [2024-11-25 13:05:47.910964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.070 qpair failed and we were unable to recover it. 
00:31:08.070 [2024-11-25 13:05:47.911363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.070 [2024-11-25 13:05:47.911395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.070 qpair failed and we were unable to recover it.
00:31:08.349 [... the same connect() failed / sock connection error / qpair failed triplet repeats for roughly 200 further reconnect attempts, timestamps 2024-11-25 13:05:47.911 through 13:05:47.991, all against tqpair=0x7f22f4000b90, addr=10.0.0.2, port=4420 ...]
00:31:08.349 [2024-11-25 13:05:47.992025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.992058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.992440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.992473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.992830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.992871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.993147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.993177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.993420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.993453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.993665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.993696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.994000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.994034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.994407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.994437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.994817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.994849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.995234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.995268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 
00:31:08.349 [2024-11-25 13:05:47.995652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.995684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.996079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.996112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.996501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.349 [2024-11-25 13:05:47.996533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.349 qpair failed and we were unable to recover it. 00:31:08.349 [2024-11-25 13:05:47.996746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.996779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:47.997170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.997209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:47.997556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.997588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:47.997879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.997914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:47.998253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.998284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:47.998664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.998695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:47.998994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.999029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 
00:31:08.350 [2024-11-25 13:05:47.999426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.999458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:47.999832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:47.999877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.000276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.000309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.000689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.000722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.001073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.001106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.001322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.001355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.001712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.001745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.002109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.002142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.002516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.002548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.002912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.002946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 
00:31:08.350 [2024-11-25 13:05:48.003341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.003373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.003740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.003772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.004091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.004124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.004358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.004392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.004747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.004780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.005142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.005178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.005446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.005479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.005768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.005799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.006133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.006164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.006535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.006566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 
00:31:08.350 [2024-11-25 13:05:48.006938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.006972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.007232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.007268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.007556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.007586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.007957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.007991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.350 [2024-11-25 13:05:48.008374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.350 [2024-11-25 13:05:48.008407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.350 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.008760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.008792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.009168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.009202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.009574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.009605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.009959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.009993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.010385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.010417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 
00:31:08.351 [2024-11-25 13:05:48.010779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.010812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.011201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.011234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.011583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.011615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.011962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.011995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.012372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.012411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.012731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.012763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.013181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.013215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.013569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.013602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.013953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.013987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.014293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.014325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 
00:31:08.351 [2024-11-25 13:05:48.014562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.014596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.014999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.015032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.015399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.015433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.015806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.015839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.016268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.016299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.016647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.016679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.017028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.017063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.017285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.017317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.017576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.351 [2024-11-25 13:05:48.017611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.351 qpair failed and we were unable to recover it. 00:31:08.351 [2024-11-25 13:05:48.017957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.017990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 
00:31:08.352 [2024-11-25 13:05:48.018353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.018383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.018780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.018814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.019178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.019211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.019565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.019595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.019752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.019783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.020245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.020279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.020521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.020553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.020942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.020977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.021199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.021234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.021616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.021648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 
00:31:08.352 [2024-11-25 13:05:48.021926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.021958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.022325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.022357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.022712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.022744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.023147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.023179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.023558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.023593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.023824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.023856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.024254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.024287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.024633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.024667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.025031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.025065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.025403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.025435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 
00:31:08.352 [2024-11-25 13:05:48.025688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.025718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.026102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.026133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.026497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.026528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.026809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.026839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.027228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.027267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.027606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.027636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.028001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.028036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.352 [2024-11-25 13:05:48.028400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.352 [2024-11-25 13:05:48.028432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.352 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.028766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.028800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.029195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.029229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 
00:31:08.353 [2024-11-25 13:05:48.029555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.029590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.029953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.029987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.030360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.030393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.030646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.030677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.030950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.030982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.031376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.031407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.031640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.031670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.032041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.032075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.032459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.032489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.032851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.032892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 
00:31:08.353 [2024-11-25 13:05:48.033248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.033281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.033532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.033565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.033902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.033937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.034345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.034380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.034733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.034765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.035122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.035156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.035536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.035569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.035924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.035958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.036356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.036389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.036734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.036766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 
00:31:08.353 [2024-11-25 13:05:48.037156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.037189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.037551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.037581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.037916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.037952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.038210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.038239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.038617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.038650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.353 [2024-11-25 13:05:48.039033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.353 [2024-11-25 13:05:48.039068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.353 qpair failed and we were unable to recover it. 00:31:08.354 [2024-11-25 13:05:48.039433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.354 [2024-11-25 13:05:48.039464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.354 qpair failed and we were unable to recover it. 00:31:08.354 [2024-11-25 13:05:48.039842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.354 [2024-11-25 13:05:48.039883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.354 qpair failed and we were unable to recover it. 00:31:08.354 [2024-11-25 13:05:48.040288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.354 [2024-11-25 13:05:48.040321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.354 qpair failed and we were unable to recover it. 00:31:08.354 [2024-11-25 13:05:48.040581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.354 [2024-11-25 13:05:48.040612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.354 qpair failed and we were unable to recover it. 
00:31:08.354 [2024-11-25 13:05:48.040983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.354 [2024-11-25 13:05:48.041015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.354 qpair failed and we were unable to recover it.
[... roughly 200 further repetitions of this identical error pair (same posix_sock_create connect() failed, errno = 111 followed by the same nvme_tcp_qpair_connect_sock error for tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it.") between 13:05:48.041368 and 13:05:48.120602 elided ...]
00:31:08.361 [2024-11-25 13:05:48.121060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.361 [2024-11-25 13:05:48.121092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.361 qpair failed and we were unable to recover it.
00:31:08.361 [2024-11-25 13:05:48.121457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.121487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.121745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.121774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.122140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.122172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.122463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.122492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.122881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.122912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.123389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.123419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.123760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.123793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.123990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.124024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.124295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.124328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.124698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.124730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 
00:31:08.361 [2024-11-25 13:05:48.125078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.125109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.125482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.125513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.125861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.125901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.126272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.126302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.126663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.126694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.127066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.127100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.127502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.127533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.127897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.127931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.128333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.128364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.128726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.128758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 
00:31:08.361 [2024-11-25 13:05:48.129177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.129210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.129560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.129599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.129836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.129880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.130114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.130146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.130507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.130538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.130850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.130887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.131339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.131370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.131709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.131741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.132121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.132154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 00:31:08.361 [2024-11-25 13:05:48.132533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.361 [2024-11-25 13:05:48.132564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.361 qpair failed and we were unable to recover it. 
00:31:08.361 [2024-11-25 13:05:48.132822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.132852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.133247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.133279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.133652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.133684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.133981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.134013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.134398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.134431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.134769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.134802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.135175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.135207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.135587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.135618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.136028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.136062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.136458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.136490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 
00:31:08.362 [2024-11-25 13:05:48.136819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.136849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.137167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.137199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.137571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.137602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.137837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.137876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.138187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.138218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.138573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.138605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.138949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.138981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.139307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.139339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.139745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.139777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.140150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.140183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 
00:31:08.362 [2024-11-25 13:05:48.140467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.140496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.140899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.140931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.141324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.141355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.141677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.141709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.141994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.142026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.142336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.142368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.142703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.142734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.142971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.143006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.143268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.143299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.143699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.143730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 
00:31:08.362 [2024-11-25 13:05:48.143967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.143999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.144260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.144298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.144662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.362 [2024-11-25 13:05:48.144693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.362 qpair failed and we were unable to recover it. 00:31:08.362 [2024-11-25 13:05:48.145104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.145137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.145515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.145547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.145917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.145949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.146346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.146378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.146635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.146671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.147024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.147057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.147508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.147538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 
00:31:08.363 [2024-11-25 13:05:48.147885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.147918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.148358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.148389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.148831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.148872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.149117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.149148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.149514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.149545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.149894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.149929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.150335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.150365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.150608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.150642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.150911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.150946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.151208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.151237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 
00:31:08.363 [2024-11-25 13:05:48.151617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.151648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.152007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.152041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.152425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.152455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.152833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.152871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.153158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.153189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.153535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.153567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.154026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.154058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.154419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.154452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.154827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.154860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.155282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.155313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 
00:31:08.363 [2024-11-25 13:05:48.155664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.155695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.156043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.156074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.156442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.156473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.156829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.156860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.157280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.157312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.157551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.157580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.157941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.157974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.158360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.158391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.158615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.158647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.158982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.159015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 
00:31:08.363 [2024-11-25 13:05:48.159324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.363 [2024-11-25 13:05:48.159356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.363 qpair failed and we were unable to recover it. 00:31:08.363 [2024-11-25 13:05:48.159710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.159747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.160035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.160067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.160311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.160344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.160731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.160762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.161104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.161138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.161501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.161531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.161913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.161946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.162338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.162370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.162725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.162755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 
00:31:08.364 [2024-11-25 13:05:48.163107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.163139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.163304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.163335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.163707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.163739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.164105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.164138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.164503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.164535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.164911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.164943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.165315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.165347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.165708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.165738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.165991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.166025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.166375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.166407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 
00:31:08.364 [2024-11-25 13:05:48.166656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.166687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.166989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.167022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.167382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.167414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.167777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.167808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.168176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.168208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.168556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.168587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.168944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.168977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.169362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.169393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.169748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.169781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.170174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.170207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 
00:31:08.364 [2024-11-25 13:05:48.170574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.170606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.170974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.171008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.171253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.171285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.171636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.171668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.172105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.172138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.172517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.172548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.172922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.172955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.173264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.173294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.173663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.364 [2024-11-25 13:05:48.173695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.364 qpair failed and we were unable to recover it. 00:31:08.364 [2024-11-25 13:05:48.174041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.174073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 
00:31:08.365 [2024-11-25 13:05:48.174459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.174490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.174848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.174899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.175295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.175327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.175562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.175591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.175988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.176020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.176388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.176419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.176765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.176795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.177172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.177204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.177532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.177561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.177931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.177964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 
00:31:08.365 [2024-11-25 13:05:48.178329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.178361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.178706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.178735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.179094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.179127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.179470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.179502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.179855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.179894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.180146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.180176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.180549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.180580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.180942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.180975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.181344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.181375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.181758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.181790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 
00:31:08.365 [2024-11-25 13:05:48.182176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.182208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.182590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.182623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.182980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.183012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.183397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.183428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.183758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.183788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.184133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.184165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.184500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.184532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.184912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.184945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.185383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.185414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.185784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.185815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 
00:31:08.365 [2024-11-25 13:05:48.186116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.186149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.186522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.186552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.186927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.186959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.187330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.187363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.187720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.187751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.188041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.188073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.365 [2024-11-25 13:05:48.188471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.365 [2024-11-25 13:05:48.188501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.365 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.188652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.188681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.188959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.188992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.189389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.189420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 
00:31:08.366 [2024-11-25 13:05:48.189702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.189731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.189987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.190025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.190406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.190436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.190814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.190845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.191262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.191294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.191701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.191731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.192102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.192135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.192480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.192513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.192846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.192883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.193240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.193271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 
00:31:08.366 [2024-11-25 13:05:48.193631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.193662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.194092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.194123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.194522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.194555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.194940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.194972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.195219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.195251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.195676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.195708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.195976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.196009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.196378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.196409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.196627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.196664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.196954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.196987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 
00:31:08.366 [2024-11-25 13:05:48.197388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.197419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.197778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.197813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.198195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.198230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.198587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.198619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.198963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.198996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.199377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.199408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.199541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.199577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.199971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.200004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.200150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.200184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.200434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.200466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 
00:31:08.366 [2024-11-25 13:05:48.200834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.200876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.201303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.201336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.201670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.201702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.202046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.202079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.366 [2024-11-25 13:05:48.202450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.366 [2024-11-25 13:05:48.202482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.366 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.202848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.202887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.203264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.203295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.203665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.203696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.204057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.204089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.204455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.204486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 
00:31:08.367 [2024-11-25 13:05:48.204841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.204881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.205321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.205360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.205746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.205777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.206130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.206163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.206491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.206523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.206886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.206919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.207183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.207215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.207570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.207602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.208000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.208033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.208425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.208458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 
00:31:08.367 [2024-11-25 13:05:48.208896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.208929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.209219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.209252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.209587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.209617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.209984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.210018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.210383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.210414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.210809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.210841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.211271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.211305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.211658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.211690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.212084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.212116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.212528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.212560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 
00:31:08.367 [2024-11-25 13:05:48.212931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.212964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.213401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.213436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.213808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.213842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.214132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.214168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.214572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.214606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.214946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.214980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.215241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.367 [2024-11-25 13:05:48.215278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.367 qpair failed and we were unable to recover it. 00:31:08.367 [2024-11-25 13:05:48.215613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.215645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.216036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.216072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.216440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.216475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 
00:31:08.368 [2024-11-25 13:05:48.216762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.216794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.217225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.217259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.217674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.217706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.218087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.218119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.218488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.218520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.218912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.218945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.219351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.219383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.219757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.219788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.220144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.220178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.220522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.220556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 
00:31:08.368 [2024-11-25 13:05:48.220808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.220843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.221208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.221246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.221513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.221546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.221946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.221980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.222344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.222376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.222741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.222775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.223122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.223156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.223533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.223567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.223930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.223963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.224330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.224362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 
00:31:08.368 [2024-11-25 13:05:48.224723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.224755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.225008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.225041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.225438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.225470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.225894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.225960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.226342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.226373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.226627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.226660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.226998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.227032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.227412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.227445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.227799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.227831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.228203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.228236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 
00:31:08.368 [2024-11-25 13:05:48.228475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.228508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.228961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.228995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.229370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.229403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.229741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.229773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.230034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.368 [2024-11-25 13:05:48.230067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.368 qpair failed and we were unable to recover it. 00:31:08.368 [2024-11-25 13:05:48.230459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.230491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.230839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.230880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.231325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.231357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.231522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.231556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.231945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.231979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 
00:31:08.369 [2024-11-25 13:05:48.232413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.232445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.232812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.232843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.233208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.233240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.233628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.233662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.233904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.233938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.234322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.234354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.234759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.234790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.235094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.235127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.235492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.235524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 00:31:08.369 [2024-11-25 13:05:48.235964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.369 [2024-11-25 13:05:48.235997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.369 qpair failed and we were unable to recover it. 
00:31:08.641 [2024-11-25 13:05:48.236373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.236406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.236760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.236801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.237257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.237290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.237684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.237714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.238051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.238085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.238471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.238503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.238891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.238925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.239352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.239385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.239773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.239806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 00:31:08.641 [2024-11-25 13:05:48.240205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.641 [2024-11-25 13:05:48.240239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.641 qpair failed and we were unable to recover it. 
00:31:08.646 [2024-11-25 13:05:48.312972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.313005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.313390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.313421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.313783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.313816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.314155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.314187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.314558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.314590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.314951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.314985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.315377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.315408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.315776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.315807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.316183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.316215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.316581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.316612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 
00:31:08.646 [2024-11-25 13:05:48.316994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.317026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.317414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.646 [2024-11-25 13:05:48.317445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.646 qpair failed and we were unable to recover it. 00:31:08.646 [2024-11-25 13:05:48.317790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.317821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.318179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.318213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.318499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.318529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.318919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.318954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.319349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.319381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.319732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.319763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.320178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.320211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.320458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.320489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 
00:31:08.647 [2024-11-25 13:05:48.320758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.320789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.321180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.321213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.321626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.321657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.322015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.322048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.322397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.322427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.322717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.322748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.323129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.323161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.323546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.323577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.323935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.323979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.324365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.324396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 
00:31:08.647 [2024-11-25 13:05:48.324766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.324798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.325110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.325142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.325511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.325542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.325898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.325929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.326310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.326340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.326697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.326728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.327165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.327196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.327551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.327581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.327997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.328029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.328322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.328352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 
00:31:08.647 [2024-11-25 13:05:48.328737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.328769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.329169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.329201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.329443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.329475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.329881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.329930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.330327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.330357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.330605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.330635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.331003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.331035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.331380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.331411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.331823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.331853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 00:31:08.647 [2024-11-25 13:05:48.332295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.647 [2024-11-25 13:05:48.332327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.647 qpair failed and we were unable to recover it. 
00:31:08.647 [2024-11-25 13:05:48.332602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.332634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.333002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.333035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.333486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.333516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.333881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.333913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.334291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.334323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.334670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.334702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.335050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.335081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.335474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.335505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.335881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.335914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.336184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.336217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 
00:31:08.648 [2024-11-25 13:05:48.336475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.336505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.336820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.336852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.337250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.337282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.337625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.337656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.338016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.338049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.338402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.338433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.338880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.338912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.339312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.339343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.339720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.339757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.339930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.339964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 
00:31:08.648 [2024-11-25 13:05:48.340338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.340368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.340661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.340692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.341051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.341084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.341342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.341373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.341605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.341639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.342000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.342033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.342418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.342449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.342747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.342778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.343188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.343220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.343470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.343503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 
00:31:08.648 [2024-11-25 13:05:48.343892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.343923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.344288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.344319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.344685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.344717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.344996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.345029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.345400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.345431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.345882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.648 [2024-11-25 13:05:48.345915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.648 qpair failed and we were unable to recover it. 00:31:08.648 [2024-11-25 13:05:48.346291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.346321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.346576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.346608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.346989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.347019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.347399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.347429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 
00:31:08.649 [2024-11-25 13:05:48.347799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.347829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.348162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.348193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.348535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.348565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.348915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.348948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.349342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.349371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.349772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.349803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.350183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.350216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.350641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.350671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.350925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.350957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.351356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.351388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 
00:31:08.649 [2024-11-25 13:05:48.351756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.351787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.351954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.351988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.352278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.352310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.352663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.352694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.353021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.353054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.353440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.353472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.353853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.353896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.354283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.354314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.354662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.354699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.355068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.355101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 
00:31:08.649 [2024-11-25 13:05:48.355474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.355504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.355884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.355917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.356190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.356221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.356569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.356600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.356985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.357019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.357411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.357441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.357762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.357792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.358162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.358196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.358554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.358585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.358933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.358965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 
00:31:08.649 [2024-11-25 13:05:48.359305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.359337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.359608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.359638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.360012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.360044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.649 qpair failed and we were unable to recover it. 00:31:08.649 [2024-11-25 13:05:48.360447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.649 [2024-11-25 13:05:48.360480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.360817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.360848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.361251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.361284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.361628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.361658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.362022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.362056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.362387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.362420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.362770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.362802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 
00:31:08.650 [2024-11-25 13:05:48.363205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.363237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.363638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.363668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.363824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.363857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.364151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.364184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.364426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.364456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.364796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.364827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.365211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.365244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.365592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.365623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.365996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.366027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 00:31:08.650 [2024-11-25 13:05:48.366300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.650 [2024-11-25 13:05:48.366331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.650 qpair failed and we were unable to recover it. 
00:31:08.650 [2024-11-25 13:05:48.366676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.650 [2024-11-25 13:05:48.366706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.650 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times, host timestamps 2024-11-25 13:05:48.367 through 13:05:48.446, console time 00:31:08.650-00:31:08.655; only the timestamps differ between repetitions ...]
00:31:08.656 [2024-11-25 13:05:48.446254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.446283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.446650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.446679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.446937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.446967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.447326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.447355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.447735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.447764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.448150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.448180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.448545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.448574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.448941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.448973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.449342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.449371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.449741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.449770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 
00:31:08.656 [2024-11-25 13:05:48.450116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.450148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.450509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.450537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.450885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.450918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.451327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.451358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.451674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.451702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.452077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.452107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.452467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.452497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.452884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.452916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.453162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.453193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.453617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.453646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 
00:31:08.656 [2024-11-25 13:05:48.454012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.454043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.454297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.454329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.454755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.454784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.455133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.455164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.455509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.455538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.455903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.455934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.456192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.456220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.456571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.456600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.656 [2024-11-25 13:05:48.456970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.656 [2024-11-25 13:05:48.457000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.656 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.457360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.457389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 
00:31:08.657 [2024-11-25 13:05:48.457743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.457772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.458115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.458146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.458489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.458519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.458888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.458919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.459292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.459320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.459736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.459764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.460147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.460177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.460408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.460439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.460807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.460842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.461228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.461259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 
00:31:08.657 [2024-11-25 13:05:48.461608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.461637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.462000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.462031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.462399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.462427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.462797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.462826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.463227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.463257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.463632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.463661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.464022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.464053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.464413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.464442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.464810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.464838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.465221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.465251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 
00:31:08.657 [2024-11-25 13:05:48.465506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.465539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.465923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.465953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.466327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.466356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.466720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.466749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.467139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.467171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.467603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.467633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.468002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.468033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.468424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.468453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.468706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.468734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.468969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.468999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 
00:31:08.657 [2024-11-25 13:05:48.469393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.469422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.469788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.469817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.470202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.470232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.470572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.470601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.470984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.471015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.471382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.471417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.657 [2024-11-25 13:05:48.471783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.657 [2024-11-25 13:05:48.471813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.657 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.472213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.472245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.472607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.472635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.473027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.473057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 
00:31:08.658 [2024-11-25 13:05:48.473435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.473464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.473819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.473849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.474274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.474305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.474666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.474696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.475053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.475085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.475436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.475465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.475885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.475916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.476282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.476312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.476676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.476706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.476997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.477029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 
00:31:08.658 [2024-11-25 13:05:48.477394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.477424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.477811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.477841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.478220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.478251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.478619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.478649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.479015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.479048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.479457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.479487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.479851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.479894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.480277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.480306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.480666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.480695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.480924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.480954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 
00:31:08.658 [2024-11-25 13:05:48.481318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.481348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.481717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.481746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.482105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.482135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.482493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.482522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.482914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.482945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.483320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.483350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.483606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.483637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.483922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.483952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.484310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.484338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.484686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.484715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 
00:31:08.658 [2024-11-25 13:05:48.485081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.485113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.485515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.485543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.485920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.485950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.486364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.658 [2024-11-25 13:05:48.486394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.658 qpair failed and we were unable to recover it. 00:31:08.658 [2024-11-25 13:05:48.486801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.486830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.487220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.487258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.487639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.487668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.488031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.488062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.488313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.488343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.488734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.488763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 
00:31:08.659 [2024-11-25 13:05:48.489151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.489182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.489549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.489578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.489945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.489974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.490221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.490249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.490619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.490649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.490999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.491029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.491464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.491492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.491721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.491752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.492124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.492155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.492525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.492554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 
00:31:08.659 [2024-11-25 13:05:48.492917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.492950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.493326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.493354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.493720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.493749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.494012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.494043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.494353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.494381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.494734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.494762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.495152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.495182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.495391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.495424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.495800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.495830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 00:31:08.659 [2024-11-25 13:05:48.496215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.659 [2024-11-25 13:05:48.496246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.659 qpair failed and we were unable to recover it. 
00:31:08.659 [2024-11-25 13:05:48.496656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.659 [2024-11-25 13:05:48.496685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.659 qpair failed and we were unable to recover it.
[The three-line pattern above repeats continuously, identical except for advancing timestamps (13:05:48.496656 through 13:05:48.577674; wall clock 00:31:08.659 through 00:31:08.939): every connect() attempt from tqpair=0x7f22f4000b90 to 10.0.0.2:4420 fails with errno = 111, and the qpair cannot be recovered.]
00:31:08.939 [2024-11-25 13:05:48.578037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.578068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.578434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.578462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.578826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.578854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.579245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.579275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.579620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.579649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.580059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.580097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.580477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.580507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.580762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.580791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.581152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.581183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.581585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.581614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 
00:31:08.939 [2024-11-25 13:05:48.581898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.581928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.582311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.582340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.582706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.582735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.583088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.583118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.583463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.583492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.939 [2024-11-25 13:05:48.583883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.939 [2024-11-25 13:05:48.583914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.939 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.584260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.584289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.584652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.584681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.585041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.585072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.585434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.585464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 
00:31:08.940 [2024-11-25 13:05:48.585826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.585855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.586152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.586181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.586421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.586452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.586888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.586919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.587304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.587332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.587688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.587716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.588093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.588124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.588472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.588501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.588878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.588910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.589282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.589311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 
00:31:08.940 [2024-11-25 13:05:48.589687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.589716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.590061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.590093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.590480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.590510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.590881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.590911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.591282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.591311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.591679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.591708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.592092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.592122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.592481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.592510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.592881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.592911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.593271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.593301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 
00:31:08.940 [2024-11-25 13:05:48.593668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.593698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.594074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.594106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.594480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.594509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.594897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.594927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.595291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.595319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.940 [2024-11-25 13:05:48.595678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.940 [2024-11-25 13:05:48.595714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.940 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.596039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.596069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.596385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.596413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.596807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.596836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.597087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.597119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 
00:31:08.941 [2024-11-25 13:05:48.597500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.597528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.597893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.597924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.598276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.598305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.598650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.598679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.599062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.599093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.599457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.599486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.599855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.599899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.600218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.600247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.600661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.600689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.601051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.601082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 
00:31:08.941 [2024-11-25 13:05:48.601469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.601498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.601882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.601914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.602326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.602355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.602596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.602624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.602987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.603018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.603358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.603387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.603774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.603802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.604165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.604195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.941 qpair failed and we were unable to recover it. 00:31:08.941 [2024-11-25 13:05:48.604419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.941 [2024-11-25 13:05:48.604450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.604840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.604878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 
00:31:08.942 [2024-11-25 13:05:48.605235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.605264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.605631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.605659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.606005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.606036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.606398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.606427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.606775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.606803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.607176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.607206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.607550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.607580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.607844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.607886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.608274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.608304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.608670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.608699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 
00:31:08.942 [2024-11-25 13:05:48.609058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.609088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.609452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.609481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.609761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.609790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.610141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.610171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.610537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.610566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.610939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.610975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.611313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.611342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.611742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.611770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.612114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.612144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.612509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.612539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 
00:31:08.942 [2024-11-25 13:05:48.612894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.612925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.613285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.613314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.613676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.613705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.614121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.614151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.614511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.614539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.614808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.614836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.615197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.615228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.615592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.615621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.942 [2024-11-25 13:05:48.615992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.942 [2024-11-25 13:05:48.616022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.942 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.616376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.616405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 
00:31:08.943 [2024-11-25 13:05:48.616768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.616797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.617175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.617206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.617577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.617606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.617975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.618005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.618361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.618391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.618734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.618763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.619165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.619195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.619542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.619572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.619931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.619961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.620335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.620364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 
00:31:08.943 [2024-11-25 13:05:48.620715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.620743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.621105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.621135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.621504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.621534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.621905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.621935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.622297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.622325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.622585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.622614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.623032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.623063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.623379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.623409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.623657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.623689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.624059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.624089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 
00:31:08.943 [2024-11-25 13:05:48.624449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.624478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.624830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.624859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.625222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.625251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.625498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.625526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.625886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.625917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.626282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.626317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.626725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.626754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.627084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.943 [2024-11-25 13:05:48.627115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.943 qpair failed and we were unable to recover it. 00:31:08.943 [2024-11-25 13:05:48.627379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.944 [2024-11-25 13:05:48.627408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.944 qpair failed and we were unable to recover it. 00:31:08.944 [2024-11-25 13:05:48.627758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.944 [2024-11-25 13:05:48.627786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.944 qpair failed and we were unable to recover it. 
00:31:08.944 [2024-11-25 13:05:48.628173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.944 [2024-11-25 13:05:48.628203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.944 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair error repeats for every reconnect attempt from 13:05:48.628 through 13:05:48.685, always with errno = 111, tqpair=0x7f22f4000b90, addr=10.0.0.2, port=4420 ...]
00:31:08.949 [2024-11-25 13:05:48.685066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.949 [2024-11-25 13:05:48.685096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.949 qpair failed and we were unable to recover it.
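The repeated errno = 111 above is Linux's ECONNREFUSED: the NVMe/TCP host keeps calling connect() toward 10.0.0.2:4420 while nothing is listening there, so posix_sock_create() fails and nvme_tcp_qpair_connect_sock() gives up on the qpair. A minimal standalone C sketch of that failure mode (not SPDK code; the address and port are simply the ones in the log):

```c
/* Minimal sketch (not SPDK code): reproduce the errno = 111 seen above.
 * connect() to an address with no listener fails with ECONNREFUSED,
 * which is 111 on Linux and exactly what posix_sock_create() reports. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target process killed, this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```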
00:31:08.949 [2024-11-25 13:05:48.685461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.685489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.685852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.685896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 836454 Killed "${NVMF_APP[@]}" "$@" 00:31:08.949 [2024-11-25 13:05:48.686315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.686347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.686691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.686720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.687070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:31:08.949 [2024-11-25 13:05:48.687101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.687460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:08.949 [2024-11-25 13:05:48.687491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.949 [2024-11-25 13:05:48.687859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.687900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 
00:31:08.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.949 [2024-11-25 13:05:48.688255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.688286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.688581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.688611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.688898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.688932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.689353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.689384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.689738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.689767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.690133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.690164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.690535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.690564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.690935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.690967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.691328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.691357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 
00:31:08.949 [2024-11-25 13:05:48.691707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.691745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.949 [2024-11-25 13:05:48.692097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.949 [2024-11-25 13:05:48.692129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.949 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-25 13:05:48.692492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-25 13:05:48.692521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-25 13:05:48.692882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-25 13:05:48.692915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-25 13:05:48.693272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-25 13:05:48.693302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-25 13:05:48.693669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-25 13:05:48.693699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-25 13:05:48.694070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-25 13:05:48.694101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-25 13:05:48.694456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-25 13:05:48.694486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-25 13:05:48.694857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-25 13:05:48.694901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 00:31:08.950 [2024-11-25 13:05:48.695044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.950 [2024-11-25 13:05:48.695078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.950 qpair failed and we were unable to recover it. 
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=837486
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 837486
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 837486 ']'
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:08.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:08.950 13:05:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
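The trace above shows nvmf_target_disconnect_tc2 launching a fresh target inside the cvl_0_0_ns_spdk namespace, capturing its pid (837486), and then calling waitforlisten, which polls until the new process answers on the RPC socket /var/tmp/spdk.sock or the retry budget (max_retries=100) runs out. waitforlisten itself is a shell function in autotest_common.sh; the sketch below is only a schematic C equivalent of that polling loop, with a hypothetical function name and an arbitrary 100 ms retry delay:

/* Schematic equivalent of the waitforlisten loop traced above:
 * retry connect() on the UNIX-domain RPC socket until the target
 * is listening or the retry budget is exhausted. Illustrative only;
 * the real helper is a shell function in autotest_common.sh. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;                 /* RPC socket is up */
        }
        close(fd);
        usleep(100 * 1000);           /* back off before the next attempt */
    }
    return -1;                        /* gave up after max_retries */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "target never started listening\n");
        return 1;
    }
    printf("target is listening\n");
    return 0;
}

The connect-refused records keep repeating through this window precisely because the qpair retries run concurrently with the target still starting up.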
00:31:08.955 [2024-11-25 13:05:48.751333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.955 [2024-11-25 13:05:48.751362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.955 qpair failed and we were unable to recover it.
00:31:08.955 [2024-11-25 13:05:48.752810] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:31:08.955 [2024-11-25 13:05:48.752891] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
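The two lines above are the new nvmf_tgt process coming up: SPDK v25.01-pre (git sha1 e4a86cc92) initializes DPDK 24.03.0, and the -m 0xF0 mask from the launch command appears in the EAL parameters as the coremask -c 0xF0 (cores 4 through 7), with --file-prefix=spdk0 keeping this instance's hugepage files separate. A sketch of how such an argument vector reaches DPDK through rte_eal_init(), abridged to the flags of interest and assuming a DPDK development install; this is not the SPDK startup path itself:

/* Sketch of handing EAL parameters like those in the log line above
 * to DPDK. Requires a DPDK development install; flag values are
 * copied (abridged) from the log for illustration. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *argv[] = {
        "nvmf",
        "-c", "0xF0",                    /* coremask: cores 4-7 */
        "--no-telemetry",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",           /* isolate hugepage files */
        "--proc-type=auto",
    };
    int argc = sizeof(argv) / sizeof(argv[0]);

    if (rte_eal_init(argc, argv) < 0) {  /* parses and consumes EAL args */
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}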
00:31:08.956 [2024-11-25 13:05:48.769687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.956 [2024-11-25 13:05:48.769717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.957 qpair failed and we were unable to recover it.
00:31:08.957 [2024-11-25 13:05:48.770099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.770131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.770370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.770400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.770785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.770814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.771073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.771104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.771463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.771495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.771852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.771895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.772289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.772320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.772567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.772601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.772959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.772990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.773376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.773406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 
00:31:08.957 [2024-11-25 13:05:48.773765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.773793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.774168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.774199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.774469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.774499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.774877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.774908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.775291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.775321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.775681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.775711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.775996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.776027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.776417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.776448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.776811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.776840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.777219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.777250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 
00:31:08.957 [2024-11-25 13:05:48.777623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.777652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.778025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.778058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.778446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.778476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.778850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.778892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.779271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.779301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.779621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.779650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.779898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.779931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.780380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.780410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.957 [2024-11-25 13:05:48.780672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.957 [2024-11-25 13:05:48.780702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.957 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.781091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.781122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 
00:31:08.958 [2024-11-25 13:05:48.781486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.781523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.781878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.781909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.782244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.782274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.782643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.782673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.783039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.783071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.783444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.783472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.783849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.783890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.784337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.784366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.784733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.784762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.785106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.785137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 
00:31:08.958 [2024-11-25 13:05:48.785489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.785518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.785886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.785917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.786276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.786306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.786669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.786698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.787070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.787103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.787475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.787505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.787850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.787891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.788238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.788268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.788642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.788671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.789048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.789079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 
00:31:08.958 [2024-11-25 13:05:48.789455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.789485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.958 [2024-11-25 13:05:48.789843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.958 [2024-11-25 13:05:48.789886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.958 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.790246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.790275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.790642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.790671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.791057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.791089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.791434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.791463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.791769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.791798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.792156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.792188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.792541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.792570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.792948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.792978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 
00:31:08.959 [2024-11-25 13:05:48.793293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.793322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.793636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.793667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.793999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.794029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.794395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.794424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.794792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.794822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.795252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.795282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.795648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.795677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.796051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.796083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.796475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.796504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.796857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.796903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 
00:31:08.959 [2024-11-25 13:05:48.797264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.797300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.797646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.797676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.798044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.798074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.798453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.798483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.798876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.798908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.799198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.799232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.799609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.799637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.800035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.800066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.800453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.800483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 00:31:08.959 [2024-11-25 13:05:48.800755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.800784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.959 qpair failed and we were unable to recover it. 
00:31:08.959 [2024-11-25 13:05:48.801148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.959 [2024-11-25 13:05:48.801178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.801587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.801617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.801843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.801881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.802259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.802288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.802686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.802715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.802965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.802999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.803398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.803426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.803793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.803822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.804254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.804284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.804696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.804725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 
00:31:08.960 [2024-11-25 13:05:48.805073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.805104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.805454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.805483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.805853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.805895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.806230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.806259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.806631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.806659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.807019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.807050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.807391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.807420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.807685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.807715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.807976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.808007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.808359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.808388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 
00:31:08.960 [2024-11-25 13:05:48.808741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.808769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.809120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.809153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.809526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.809555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.809911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.809942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.810301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.810331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.810587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.810616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.811018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.811049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.811414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.811443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.811686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.811717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.960 qpair failed and we were unable to recover it. 00:31:08.960 [2024-11-25 13:05:48.812009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.960 [2024-11-25 13:05:48.812040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 
00:31:08.961 [2024-11-25 13:05:48.812428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.812463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.812845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.812895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.813253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.813282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.813649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.813678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.814065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.814097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.814463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.814493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.814765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.814795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.815130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.815161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.815391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.815424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.815701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.815729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 
00:31:08.961 [2024-11-25 13:05:48.816175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.816206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.816444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.816473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.816838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.816876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.817143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.817175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.817305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.817334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.817784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.817813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.818056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.818089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.818509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.818539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.818921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.818952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 00:31:08.961 [2024-11-25 13:05:48.819200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.961 [2024-11-25 13:05:48.819229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:08.961 qpair failed and we were unable to recover it. 
00:31:08.961 [2024-11-25 13:05:48.819580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.961 [2024-11-25 13:05:48.819609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.961 qpair failed and we were unable to recover it.
00:31:08.961 [2024-11-25 13:05:48.819958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:08.961 [2024-11-25 13:05:48.819990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:08.961 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() refused with errno = 111, nvme_tcp qpair connection error for tqpair=0x7f22f4000b90 against 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 13:05:48.820343 through 13:05:48.865628 ...]
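For context on the errors above: errno 111 on Linux is ECONNREFUSED, meaning nothing was listening on 10.0.0.2:4420 (the standard NVMe/TCP port) when the initiator tried to connect, so each qpair reconnect attempt fails immediately and is retried. A minimal standalone sketch of that connect-and-retry pattern is below; it is not SPDK's implementation, and the retry count and backoff are illustrative assumptions.

/* Minimal sketch of a connect() retry loop that hits errno 111
 * (ECONNREFUSED). Not SPDK code; retry policy is an assumption. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int
main(void)
{
	struct sockaddr_in addr = { 0 };

	addr.sin_family = AF_INET;
	addr.sin_port = htons(4420);		/* standard NVMe/TCP port */
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	for (int attempt = 0; attempt < 5; attempt++) {
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		if (fd < 0) {
			perror("socket");
			return 1;
		}
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			printf("connected on attempt %d\n", attempt + 1);
			close(fd);
			return 0;
		}
		/* errno == ECONNREFUSED (111 on Linux) means no listener
		 * at addr:port -- the same condition logged above. */
		fprintf(stderr, "connect() failed, errno = %d (%s)\n",
			errno, strerror(errno));
		close(fd);
		usleep(100 * 1000);		/* brief backoff, then retry */
	}
	return 1;
}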
00:31:09.296 [2024-11-25 13:05:48.865858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.296 [2024-11-25 13:05:48.865923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:09.296 qpair failed and we were unable to recover it.
00:31:09.296 [2024-11-25 13:05:48.866048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:09.296 [2024-11-25 13:05:48.866226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.296 [2024-11-25 13:05:48.866256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:09.296 qpair failed and we were unable to recover it.
[... the same failure pattern repeats from 13:05:48.866635 through 13:05:48.869241 ...]
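The *NOTICE* line interleaved with the failures comes from SPDK's application framework: spdk_app_start() logs how many reactor cores the app will run on. As a rough sketch of where that count comes from (the field names and the two-argument spdk_app_opts_init() follow the recent public SPDK event API and should be checked against the SPDK version in use; the mask value is an illustrative assumption):

/* Hedged sketch, not taken from this test run: an SPDK app whose
 * reactor core mask selects four cores, producing the
 * "Total cores available: 4" notice seen above. */
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
	(void)ctx;
	/* Application logic would run here once the reactors are up. */
	spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	(void)argc;
	(void)argv;
	spdk_app_opts_init(&opts, sizeof(opts));	/* two-arg form in recent SPDK */
	opts.name = "demo";
	opts.reactor_mask = "0xF";	/* four cores -> "Total cores available: 4" */

	rc = spdk_app_start(&opts, start_fn, NULL);
	spdk_app_fini();
	return rc;
}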
00:31:09.296 [2024-11-25 13:05:48.869480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.296 [2024-11-25 13:05:48.869508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:09.296 qpair failed and we were unable to recover it.
[... the same failure pattern continues from 13:05:48.869794 through 13:05:48.899291 ...]
00:31:09.298 [2024-11-25 13:05:48.899649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.298 [2024-11-25 13:05:48.899678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420
00:31:09.298 qpair failed and we were unable to recover it.
00:31:09.298 [2024-11-25 13:05:48.900033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.900064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.900436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.900465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.900707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.900736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.901116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.901147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.901522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.901552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.901828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.901857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.902235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.902266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.902669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.902697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.903053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.903085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.903434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.903463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 
00:31:09.298 [2024-11-25 13:05:48.903822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.903850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.904234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.904273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.904656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.904686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.905051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.905081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.298 qpair failed and we were unable to recover it. 00:31:09.298 [2024-11-25 13:05:48.905426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.298 [2024-11-25 13:05:48.905455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.905808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.905837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.906109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.906139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.906389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.906420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.906685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.906715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.907103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.907135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 
00:31:09.299 [2024-11-25 13:05:48.907553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.907583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.907940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.907970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.908331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.908360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.908709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.908738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.909101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.909131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.909387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.909421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.909805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.909834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.910187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.910216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.910586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.910616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.910990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.911022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 
00:31:09.299 [2024-11-25 13:05:48.911383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.911412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.911766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.911795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.912149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.912180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.912547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.912576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.912925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.912955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.913343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.913372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.913686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.913715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.913975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.914008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.914386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.914416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 00:31:09.299 [2024-11-25 13:05:48.914766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.299 [2024-11-25 13:05:48.914799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.299 qpair failed and we were unable to recover it. 
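For context: errno 111 is ECONNREFUSED, i.e. each TCP connect() to 10.0.0.2:4420 (the address and default NVMe/TCP port the initiator is dialing above) is being actively refused because nothing is accepting on that listener yet. A minimal triage sketch, assuming shell access to the host where the target should be listening; these commands are illustrative and were not part of this run:

  # Is anything listening on the NVMe/TCP port? (-l listening, -t TCP, -n numeric)
  ss -ltn | grep 4420 || echo "no listener on 4420"
  # Probe the exact address/port the initiator is dialing (zero-I/O scan, 1s timeout)
  nc -z -w 1 10.0.0.2 4420 && echo "accepting" || echo "refused/unreachable"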
00:31:09.299 [2024-11-25 13:05:48.918447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:09.299 [2024-11-25 13:05:48.918493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:09.299 [2024-11-25 13:05:48.918501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:09.299 [2024-11-25 13:05:48.918508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:09.299 [2024-11-25 13:05:48.918516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:09.300 [2024-11-25 13:05:48.920591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:31:09.300 [2024-11-25 13:05:48.920762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:31:09.300 [2024-11-25 13:05:48.920941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:31:09.300 [2024-11-25 13:05:48.920940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... the connect() ERROR / qpair-failure triplet keeps repeating around and after these notices, 13:05:48.915 through 13:05:48.924 ...]
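The app_setup_trace NOTICE lines above describe how to capture the nvmf target's tracepoints while it is still running. A short usage sketch taken directly from those messages; only the copy destination is an arbitrary choice:

  # Snapshot runtime events of the 'nvmf' app, trace instance id 0
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0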
[... the connect() ERROR / qpair-failure triplet continues to repeat with advancing timestamps through 13:05:48.960, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:31:09.303 [2024-11-25 13:05:48.961279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.961309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.961670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.961699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.962061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.962094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.962361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.962398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.962750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.962779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.963026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.963058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.963477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.963508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.963877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.963909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.964143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.964175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.964440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.964469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 
00:31:09.303 [2024-11-25 13:05:48.964858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.964899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.965311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.965340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.965716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.965744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.966109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.966140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.966518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.966547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.966908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.966938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.967226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.967255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.967537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.967569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.967966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.967996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.968237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.968266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 
00:31:09.303 [2024-11-25 13:05:48.968660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.968689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.968965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.968996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.969352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.969381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.969602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.969630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.969999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.970031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.970397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.970428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.970809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.970838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.971318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.971350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.971495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.971523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.303 [2024-11-25 13:05:48.971897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.971929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 
00:31:09.303 [2024-11-25 13:05:48.972304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.303 [2024-11-25 13:05:48.972335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.303 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.972719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.972747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.972925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.972958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.973357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.973387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.973763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.973792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.974202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.974233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.974468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.974497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.974882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.974912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.975263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.975293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.975740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.975769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 
00:31:09.304 [2024-11-25 13:05:48.976153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.976184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.976413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.976443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.976824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.976853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.977242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.977279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.977495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.977524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.977830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.977861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.978163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.978193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.978295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.978325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.978603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.978632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.979001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.979034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 
00:31:09.304 [2024-11-25 13:05:48.979366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.979395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.979772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.979801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.980176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.980207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.980560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.980589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.980699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.980726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.980957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.980989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.981290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.981320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.981779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.981809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.982095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.982127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.982367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.982396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 
00:31:09.304 [2024-11-25 13:05:48.982764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.982793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.983035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.983066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.983436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.983465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.983852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.983891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.984149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.984180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.984410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.984440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.984663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.984693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.985062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.985093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.304 [2024-11-25 13:05:48.985444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.304 [2024-11-25 13:05:48.985473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.304 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.985842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.985881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 
00:31:09.305 [2024-11-25 13:05:48.986262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.986292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.986393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.986420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.986829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.986857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.987239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.987269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.987635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.987664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.987943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.987974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.988339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.988368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.988744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.988773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.989146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.989177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.989552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.989580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 
00:31:09.305 [2024-11-25 13:05:48.989949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.989980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.990216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.990245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.990610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.990639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.991025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.991061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.991385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.991414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.991792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.991820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.992083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.992116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.992361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.992390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.992792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.992820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.993046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.993077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 
00:31:09.305 [2024-11-25 13:05:48.993415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.993444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.993657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.993686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.994064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.994094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.994317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.994346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.994751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.994781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.995130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.995160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.995428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.995458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.995819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.995849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.996115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.996145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.996500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.996529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 
00:31:09.305 [2024-11-25 13:05:48.996762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.996791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.997183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.997213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.997574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.997603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.997922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.997953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.998359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.998389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.998624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.998655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.305 [2024-11-25 13:05:48.998883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.305 [2024-11-25 13:05:48.998913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.305 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:48.999525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:48.999646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.000241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.000342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.000769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.000805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 
00:31:09.306 [2024-11-25 13:05:49.001370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.001472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.001809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.001846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.002220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.002251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.002609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.002638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.003138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.003240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.003595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.003633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.003985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.004018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.004351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.004382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.004632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.004666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.004927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.004962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 
00:31:09.306 [2024-11-25 13:05:49.005224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.005253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.005501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.005530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.005946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.005977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.006264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.006304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.006540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.006569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.006954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.006985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.007386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.007414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.007753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.007781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.008147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.008177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 00:31:09.306 [2024-11-25 13:05:49.008549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.008578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it. 
00:31:09.306 [2024-11-25 13:05:49.008799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.306 [2024-11-25 13:05:49.008827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.306 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats roughly 200 more times, timestamps 13:05:49.009190 through 13:05:49.084161; duplicate entries elided ...]
00:31:09.312 [2024-11-25 13:05:49.084517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.084546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.084811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.084839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.085071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.085101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.085334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.085363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.085720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.085749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.086018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.086048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.086438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.086467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.086819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.086847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.087237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.087266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.087649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.087678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 
00:31:09.312 [2024-11-25 13:05:49.088051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.088082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.088300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.088328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.088690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.088718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.089139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.089169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.089492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.089520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.089891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.089921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.090155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.090187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.090420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.090449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.090819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.090847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.091155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.091185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 
00:31:09.312 [2024-11-25 13:05:49.091439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.091468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.312 qpair failed and we were unable to recover it. 00:31:09.312 [2024-11-25 13:05:49.091710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.312 [2024-11-25 13:05:49.091742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.091976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.092005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.092386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.092421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.092847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.092884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.093135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.093166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.093406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.093436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.093850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.093899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.094299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.094328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.094429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.094455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 
00:31:09.313 [2024-11-25 13:05:49.094697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.094726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.095131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.095161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.095555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.095585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.095960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.095991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.096103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.096134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.096543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.096572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.096838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.096877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.097151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.097180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.097536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.097565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.097932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.097963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 
00:31:09.313 [2024-11-25 13:05:49.098317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.098347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.098455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.098484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.098922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.098951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.099307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.099336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.099708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.099737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.100050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.100080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.100456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.100485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.100776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.100805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.101150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.101180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.101378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.101408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 
00:31:09.313 [2024-11-25 13:05:49.101816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.101851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.102096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.102125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.102510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.102538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.102929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.102959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.103314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.103343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.103734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.103762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.104185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.104215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.104446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.104476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.104925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.104955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.313 qpair failed and we were unable to recover it. 00:31:09.313 [2024-11-25 13:05:49.105050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.313 [2024-11-25 13:05:49.105076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 
00:31:09.314 [2024-11-25 13:05:49.105301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.105333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.105732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.105761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.106021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.106052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.106462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.106491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.106736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.106766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.107147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.107178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.107541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.107570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.107664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.107691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.108083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.108113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.108465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.108494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 
00:31:09.314 [2024-11-25 13:05:49.108884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.108914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.109174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.109202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.109499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.109527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.109879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.109909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.110286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.110314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.110685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.110713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.110962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.110995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.111381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.111410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.111585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.111613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.112005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.112036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 
00:31:09.314 [2024-11-25 13:05:49.112414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.112443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.112837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.112874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.113359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.113388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.113703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.113731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.114093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.114123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.114500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.114528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.114898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.114927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.115171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.115199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.115592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.115621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.115974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.116004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 
00:31:09.314 [2024-11-25 13:05:49.116353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.116388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.116735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.116764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.117155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.117184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.117552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.117582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.117965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.117994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.118363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.118392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.118663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.118691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.314 qpair failed and we were unable to recover it. 00:31:09.314 [2024-11-25 13:05:49.118918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.314 [2024-11-25 13:05:49.118947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.119206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.119238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.119503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.119534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 
00:31:09.315 [2024-11-25 13:05:49.119891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.119921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.120148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.120177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.120381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.120410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.120798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.120826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.121202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.121232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.121627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.121656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.121893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.121923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.122144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.122173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.122605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.122635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.123020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.123050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 
00:31:09.315 [2024-11-25 13:05:49.123407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.123436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.123702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.123731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.123963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.123993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.124379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.124407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.124702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.124734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.125095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.125125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.125539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.125569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.125934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.125965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.126343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.126373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.126806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.126836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 
00:31:09.315 [2024-11-25 13:05:49.127189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.127221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.127594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.127622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.128025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.128056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.128422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.128450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.128675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.128703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.129092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.129122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.129488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.129516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.129759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.129787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.130168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.130200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 00:31:09.315 [2024-11-25 13:05:49.130591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.130619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 
00:31:09.315 [2024-11-25 13:05:49.130992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.315 [2024-11-25 13:05:49.131028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.315 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt from 13:05:49.131446 through 13:05:49.205789, console timestamps 00:31:09.315 through 00:31:09.617 ...]
00:31:09.617 [2024-11-25 13:05:49.206187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.617 [2024-11-25 13:05:49.206217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.617 qpair failed and we were unable to recover it. 00:31:09.617 [2024-11-25 13:05:49.206586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.617 [2024-11-25 13:05:49.206615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.617 qpair failed and we were unable to recover it. 00:31:09.617 [2024-11-25 13:05:49.206932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.617 [2024-11-25 13:05:49.206962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.207307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.207335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.207521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.207553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.207954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.207984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.208374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.208402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.208769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.208797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.209091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.209121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.209368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.209397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 
00:31:09.618 [2024-11-25 13:05:49.209776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.209805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.210167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.210196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.210583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.210611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.210969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.210999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.211280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.211309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.211576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.211605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.212006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.212035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.212282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.212316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.212714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.212743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.213144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.213173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 
00:31:09.618 [2024-11-25 13:05:49.213541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.213569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.213812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.213840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.214181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.214209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.214627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.214655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.215047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.215078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.215429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.215458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.215700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.215729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.216095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.216125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.216506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.216535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.216913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.216943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 
00:31:09.618 [2024-11-25 13:05:49.217354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.217389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.217742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.217771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.217888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.217919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.218300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.218329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.218712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.218741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.218950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.218980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.219256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.219284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.219637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.219666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.219917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.219948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 00:31:09.618 [2024-11-25 13:05:49.220322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.618 [2024-11-25 13:05:49.220350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.618 qpair failed and we were unable to recover it. 
00:31:09.619 [2024-11-25 13:05:49.220673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.220702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.220929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.220958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.221193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.221221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.221598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.221627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.221925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.221955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.222163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.222192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.222579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.222608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.222860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.222901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.223293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.223322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.223669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.223697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 
00:31:09.619 [2024-11-25 13:05:49.224068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.224098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.224483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.224512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.224752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.224780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.225001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.225031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.225289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.225320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.225671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.225699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.225936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.225968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.226074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.226103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.226330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.226358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.226759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.226788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 
00:31:09.619 [2024-11-25 13:05:49.227023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.227056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.227422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.227451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.227803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.227832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.227994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.228024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.228314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.228342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.228710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.228739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.229097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.229127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.229363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.229391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.229730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.229758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.230110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.230140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 
00:31:09.619 [2024-11-25 13:05:49.230520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.230555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.230727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.230756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.231135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.231167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.231396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.231425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.231822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.231852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.232105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.232135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.232504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.232534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.619 [2024-11-25 13:05:49.232915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.619 [2024-11-25 13:05:49.232946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.619 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.233204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.233232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.233422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.233452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 
00:31:09.620 [2024-11-25 13:05:49.233879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.233910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.234266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.234295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.234662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.234691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.234910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.234941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.235314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.235344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.235699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.235728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.236138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.236168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.236280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.236310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.236558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.236588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.236807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.236836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 
00:31:09.620 [2024-11-25 13:05:49.237133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.237162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.237556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.237584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.237825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.237854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.238242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.238271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.238492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.238521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.238759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.238789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.239167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.239197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.239562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.239593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.239957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.239987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.240350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.240379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 
00:31:09.620 [2024-11-25 13:05:49.240604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.240633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.240884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.240915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.241291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.241320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.241589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.241618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.242033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.242063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.242301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.242330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.242576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.242605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.242841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.242893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.243239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.243267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 00:31:09.620 [2024-11-25 13:05:49.243545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.620 [2024-11-25 13:05:49.243573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.620 qpair failed and we were unable to recover it. 
00:31:09.620 [2024-11-25 13:05:49.243817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.243854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.244284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.244314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.244689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.244718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.245169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.245201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.245452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.245481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.245717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.245746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.245998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.246028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.246347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.246376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.246614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.246645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.247057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.247087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 
00:31:09.621 [2024-11-25 13:05:49.247464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.247492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.247910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.247940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.248310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.248339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.248716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.248745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.249170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.249200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.249561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.249590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.249684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.249710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.249982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.250011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.250194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.250222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 00:31:09.621 [2024-11-25 13:05:49.250648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.621 [2024-11-25 13:05:49.250677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.621 qpair failed and we were unable to recover it. 
00:31:09.621 [2024-11-25 13:05:49.250900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.250930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.251408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.251436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.251771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.251800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.251933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.251966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.252348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.252378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.252801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.252830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.253195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.253225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.253595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.253625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.253852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.253887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.254126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.254156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.254515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.254543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.254805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.254833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.255228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.255258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.255635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.255664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.255881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.255911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.256152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.256181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.256562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.256590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.256969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.621 [2024-11-25 13:05:49.256999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.621 qpair failed and we were unable to recover it.
00:31:09.621 [2024-11-25 13:05:49.257348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.257376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.257769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.257797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.258180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.258216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.258588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.258616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.258980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.259010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.259246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.259275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.259668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.259696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.259824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.259850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.260258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.260287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.260632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.260661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.261055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.261085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.261487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.261515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.261736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.261764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.262145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.262174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.262507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.262536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.262773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.262802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.263049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.263080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.263432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.263461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.263842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.263878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.264098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.264128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.264472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.264501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.264878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.264909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
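Each repeated pair above is one event reported at two layers: the POSIX socket module's connect() to the target at 10.0.0.2:4420 is rejected with errno 111 (ECONNREFUSED on Linux, i.e. nothing is listening on the port while the target is down or restarting), and the NVMe/TCP layer then records the qpair connection failure. A minimal sketch of that syscall-level failure, assuming only a Linux host with no listener on the target address; this is not SPDK code:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target side this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}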
00:31:09.622 [2024-11-25 13:05:49.265151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.265180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.265463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.265491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.265586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.265613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Write completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 Read completed with error (sct=0, sc=8)
00:31:09.622 starting I/O failed
00:31:09.622 [2024-11-25 13:05:49.266007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.622 [2024-11-25 13:05:49.266464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.266483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.622 [2024-11-25 13:05:49.266829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.622 [2024-11-25 13:05:49.266840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.622 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.267036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.267048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.267308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.267318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.267635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.267646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.267970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.267982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.268178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.268189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.268398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.268409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.268751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.268762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
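When the connection drops mid-test, the 32 outstanding I/Os above are completed back to the application with sct=0, sc=8: status code type 0 is the NVMe generic command status set, where 0x08 is "Command Aborted due to SQ Deletion", and the follow-up "CQ transport error -6" is -ENXIO, matching the log's own gloss "No such device or address". A small decoder for the 16-bit completion status word, following the NVMe CQE status-field layout (phase bit in bit 0); the struct and helper below are invented for illustration and are not SPDK APIs:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: unpack the 16-bit status word of an NVMe completion
 * (CQE DW3 bits 31:16, phase bit included) and name the combination the
 * log reports for every failed I/O, sct=0 / sc=8. */
struct status_bits {
    unsigned sc;   /* status code, bits 8:1 */
    unsigned sct;  /* status code type, bits 11:9 (0 = generic status) */
    unsigned dnr;  /* do-not-retry, bit 15 */
};

static struct status_bits decode_status(uint16_t raw)
{
    struct status_bits s;
    s.sc  = (raw >> 1) & 0xff;
    s.sct = (raw >> 9) & 0x07;
    s.dnr = (raw >> 15) & 0x01;
    return s;
}

int main(void)
{
    uint16_t raw = (uint16_t)((0x0u << 9) | (0x08u << 1)); /* sct=0, sc=8 */
    struct status_bits s = decode_status(raw);

    printf("sct=%u sc=%u -> %s\n", s.sct, s.sc,
           (s.sct == 0 && s.sc == 0x08)
               ? "generic status: command aborted due to SQ deletion"
               : "other");
    return 0;
}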
00:31:09.623 [2024-11-25 13:05:49.268975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.268987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.269231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.269241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.269476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.269487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.269682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.269692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.270018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.270028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.270374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.270386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.270727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.270738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.270929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.270941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.271153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.271163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.271372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.271382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.271693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.271703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.272032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.272043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.272377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.272387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.272578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.272589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.272948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.272959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.273172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.273185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.273519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.273530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.273833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.273843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.274159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.274170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.274543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.274553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.274786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.274797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.275166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.275177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.275514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.275524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.275855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.275874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.276205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.276215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.276564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.276574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.276768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.276779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.277119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.277130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.277491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.277501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.277814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.277824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.278022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.278035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.623 qpair failed and we were unable to recover it.
00:31:09.623 [2024-11-25 13:05:49.278243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.623 [2024-11-25 13:05:49.278254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.278440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.278460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.278799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.278809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.279190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.279201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.279510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.279520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.279871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.279882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.280098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.280109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.280454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.280464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.280805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.280815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.281028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.281041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.281262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.281273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.281514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.281525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.281827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.281839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.282052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.282064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.282390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.282400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.282812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.282823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.283152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.283163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.283575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.283585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.283777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.283788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.284121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.284132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.284547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.284557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.284875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.284886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.285227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.285237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.285559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.285570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.285765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.285776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.286020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.286034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.286344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.286354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.286687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.286697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.287035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.287046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.287398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.287408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.287720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.287730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.288047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.288058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.288403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.288415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.288769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.288780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.289137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.289148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.289506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.289517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.289694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.289706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.289885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.289896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.624 [2024-11-25 13:05:49.290202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.624 [2024-11-25 13:05:49.290212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.624 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.290547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.290557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.290905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.290916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.291235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.291245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.291658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.291668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.291891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.291903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.292241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.292251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.292447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.292457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.292826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.292836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.293054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.293065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.293143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.293154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.293338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.293349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.293690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.293700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.294110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.294121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.294430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.294443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.294840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.294850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.295137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.295147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.295317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.295327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.295614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.295625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.296014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.296025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.296255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.296266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.296683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.296693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.297089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.297100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.297506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.297516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.297832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.297842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.298190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.298201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.298544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.298554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.298871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.298883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.299088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.299098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.299437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.299449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.299781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.299792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.300164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.300175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.300341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.300351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.300535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.300546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.625 [2024-11-25 13:05:49.300802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.625 [2024-11-25 13:05:49.300812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.625 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.301047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.301058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.301264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.301274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.301459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.301470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.301774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.301784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.302112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.302122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.302466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.302476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.302669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.302679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.302878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.302889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.303232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.303242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.303570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.303580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.303626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.303635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.303831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.303842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.304180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.304190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.304237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.304246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.304405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.304416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.304607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.304618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.304854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.304873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.305205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.305215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.305553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.305563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.305767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.305777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.306015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.306032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.306450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.306460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.306676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.306687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.306875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.306886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.307080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.307089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.307433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.307443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.307758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.307768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.307980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.307992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.308247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.308257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.308600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.308610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.308811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.308821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.309239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.309250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.309435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.309447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.309877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.309887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.310268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.310278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.310619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.310629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.310908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.310920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.311217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.311227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.626 qpair failed and we were unable to recover it.
00:31:09.626 [2024-11-25 13:05:49.311449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.626 [2024-11-25 13:05:49.311460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.627 qpair failed and we were unable to recover it.
00:31:09.627 [2024-11-25 13:05:49.311659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.627 [2024-11-25 13:05:49.311669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.627 qpair failed and we were unable to recover it.
00:31:09.627 [2024-11-25 13:05:49.312019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.627 [2024-11-25 13:05:49.312030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.627 qpair failed and we were unable to recover it.
00:31:09.627 [2024-11-25 13:05:49.312348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.627 [2024-11-25 13:05:49.312358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.627 qpair failed and we were unable to recover it.
00:31:09.627 [2024-11-25 13:05:49.312533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.627 [2024-11-25 13:05:49.312543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.627 qpair failed and we were unable to recover it.
00:31:09.627 [2024-11-25 13:05:49.312858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.627 [2024-11-25 13:05:49.312882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.627 qpair failed and we were unable to recover it.
00:31:09.627 [2024-11-25 13:05:49.313203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.627 [2024-11-25 13:05:49.313214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.627 qpair failed and we were unable to recover it.
00:31:09.627 [2024-11-25 13:05:49.313404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.627 [2024-11-25 13:05:49.313416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.627 qpair failed and we were unable to recover it.
00:31:09.627 [2024-11-25 13:05:49.313745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.313757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.314096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.314111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.314450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.314461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.314776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.314786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.315109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.315122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.315433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.315442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.315782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.315793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.316220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.316233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.316547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.316559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.316880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.316892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 
00:31:09.627 [2024-11-25 13:05:49.317092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.317103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.317454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.317465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.317779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.317789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.318119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.318130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.318362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.318373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.318477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.318486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.318849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.318860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.319053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.319064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.319266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.319277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.319525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.319537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 
00:31:09.627 [2024-11-25 13:05:49.319584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.319593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.319888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.319899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.320200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.320210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.320545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.320555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.320902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.320913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.321245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.321255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.321586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.321597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.321928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.321940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.322140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.322152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.322335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.322346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 
00:31:09.627 [2024-11-25 13:05:49.322647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.627 [2024-11-25 13:05:49.322659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.627 qpair failed and we were unable to recover it. 00:31:09.627 [2024-11-25 13:05:49.322908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.322919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.323126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.323137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.323503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.323514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.323821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.323832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.324140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.324151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.324352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.324362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.324743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.324754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.324942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.324954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.325274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.325286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 
00:31:09.628 [2024-11-25 13:05:49.325493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.325504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.325559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.325570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.325899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.325914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.326228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.326239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.326563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.326574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.326913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.326924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.327256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.327266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.327638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.327649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.328004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.328015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.328337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.328347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 
00:31:09.628 [2024-11-25 13:05:49.328755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.328767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.329087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.329098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.329432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.329444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.329774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.329785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.330121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.330132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.330444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.330456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.330798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.330809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.331123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.331134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.331345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.331357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.331697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.331707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 
00:31:09.628 [2024-11-25 13:05:49.332045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.332056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.332388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.332399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.332749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.332760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.333111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.333123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.333182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.333193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.333362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.333373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.333734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.333744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.334087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.334098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.334271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.334281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.628 qpair failed and we were unable to recover it. 00:31:09.628 [2024-11-25 13:05:49.334490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.628 [2024-11-25 13:05:49.334500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 
00:31:09.629 [2024-11-25 13:05:49.334669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.334680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.334998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.335008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.335262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.335274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.335488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.335498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.335820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.335831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.336091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.336102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.336400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.336410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.336735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.336745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.337043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.337054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.337360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.337370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 
00:31:09.629 [2024-11-25 13:05:49.337733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.337743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.338042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.338054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.338259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.338271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.338620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.338632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.338953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.338964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.339299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.339311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.339644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.339656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.339842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.339853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.340077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.340089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.340287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.340300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 
00:31:09.629 [2024-11-25 13:05:49.340608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.340620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.340930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.340942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.341194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.341206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.341496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.341508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.341875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.341887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.342289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.342301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.342627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.342638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.343038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.343049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.343363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.343374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.343574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.343586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 
00:31:09.629 [2024-11-25 13:05:49.343780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.343790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.344110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.344121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.344450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.344460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.629 [2024-11-25 13:05:49.344772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.629 [2024-11-25 13:05:49.344783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.629 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.345075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.345087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.345419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.345430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.345779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.345791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.346127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.346139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.346476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.346487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.346790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.346800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 
00:31:09.630 [2024-11-25 13:05:49.347187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.347201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.347391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.347402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.347741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.347752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.348084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.348095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.348432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.348442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.348762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.348772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.349016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.349028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.349369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.349380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.349711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.349723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.350046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.350058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 
00:31:09.630 [2024-11-25 13:05:49.350396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.350406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.350739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.350751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.350949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.350962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.351318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.351328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.351501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.351512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.351794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.351805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.352120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.352131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.352477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.352487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.352698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.352709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.352944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.352955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 
00:31:09.630 [2024-11-25 13:05:49.353346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.353357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.353700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.353712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.354043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.354055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.354425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.354435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.354632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.354642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.355002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.355012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.355331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.355342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.355683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.355693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.356116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.356127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 00:31:09.630 [2024-11-25 13:05:49.356430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.630 [2024-11-25 13:05:49.356440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.630 qpair failed and we were unable to recover it. 
[... the same three-line failure record repeats continuously from 13:05:49.354 through 13:05:49.416, each connect() attempt to 10.0.0.2, port=4420 failing with errno = 111 and each qpair on tqpair=0xda8490 failing without recovery; the final record of the run follows ...]
00:31:09.635 [2024-11-25 13:05:49.414194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.635 [2024-11-25 13:05:49.414204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.414436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.414446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.414745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.414755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.414811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.414821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.414932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.414942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.415288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.415299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.415486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.415497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.415872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.415885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.416172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.416183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.416370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.416387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 
00:31:09.636 [2024-11-25 13:05:49.416594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.416607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.416800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.416812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.417111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.417123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.417525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.417535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.417719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.417730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.417927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.417938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.418108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.418118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.418441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.418451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.418656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.418666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.418844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.418853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 
00:31:09.636 [2024-11-25 13:05:49.419157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.419167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.419346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.419364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.419663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.419673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.420070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.420081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.420357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.420367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.420712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.420726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.421044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.421056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.421404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.421414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.421815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.421825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.422130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.422140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 
00:31:09.636 [2024-11-25 13:05:49.422446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.422457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.422807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.422819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.423143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.423153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.423375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.423386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.423572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.423582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.423774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.423785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.424122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.424133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.424333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.424343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.424730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.636 [2024-11-25 13:05:49.424740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.636 qpair failed and we were unable to recover it. 00:31:09.636 [2024-11-25 13:05:49.424916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.424926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 
00:31:09.637 [2024-11-25 13:05:49.425300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.425310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.425611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.425621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.425936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.425947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.426139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.426149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.426494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.426504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.426903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.426913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.427219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.427228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.427374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.427385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.427698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.427709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.427871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.427883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 
00:31:09.637 [2024-11-25 13:05:49.428194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.428203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.428537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.428547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.428868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.428879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.429110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.429121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.429366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.429376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.429679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.429689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.430073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.430084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.430413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.430423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.430607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.430619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.430824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.430834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 
00:31:09.637 [2024-11-25 13:05:49.431159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.431170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.431345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.431356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.431746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.431756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.432098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.432109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.432284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.432293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.432548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.432558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.432750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.432761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.433004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.433014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.433338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.433349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.433617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.433628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 
00:31:09.637 [2024-11-25 13:05:49.433817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.433828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.434152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.434163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.434339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.434350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.434743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.434753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.435069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.435079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.435424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.435434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.435740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.637 [2024-11-25 13:05:49.435750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.637 qpair failed and we were unable to recover it. 00:31:09.637 [2024-11-25 13:05:49.436081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.436091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.436422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.436432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.436854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.436868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 
00:31:09.638 [2024-11-25 13:05:49.437180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.437191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.437504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.437515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.437868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.437881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.438181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.438192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.438393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.438404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.438633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.438644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.438844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.438853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.439058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.439068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.439388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.439399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.439731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.439741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 
00:31:09.638 [2024-11-25 13:05:49.440050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.440061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.440363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.440373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.440716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.440727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.441087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.441100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.441293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.441304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.441610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.441619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.441909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.441919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.442246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.442256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.442447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.442458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.442644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.442654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 
00:31:09.638 [2024-11-25 13:05:49.442931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.442941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.443116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.443126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.443298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.443308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.443637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.443647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.443963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.443974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.444326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.444336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.444637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.444647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.444976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.444986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.445244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.445255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.445566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.445576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 
00:31:09.638 [2024-11-25 13:05:49.445773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.445784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.638 [2024-11-25 13:05:49.445953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.638 [2024-11-25 13:05:49.445963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.638 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.446295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.446305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.446478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.446489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.446835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.446845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.447228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.447238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.447630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.447640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.448003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.448013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.448348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.448358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.448530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.448540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 
00:31:09.639 [2024-11-25 13:05:49.448911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.448923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.449227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.449238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.449535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.449548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.449895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.449907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.450139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.450149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.450453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.450463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.450771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.450781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.451016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.451027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.451256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.451266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.451465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.451477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 
00:31:09.639 [2024-11-25 13:05:49.451807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.451817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.452139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.452150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.452351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.452363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.452412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.452421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.452607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.452618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.452791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.452800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.453172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.453184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.453363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.453375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.453707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.453718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 00:31:09.639 [2024-11-25 13:05:49.453926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.639 [2024-11-25 13:05:49.453938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.639 qpair failed and we were unable to recover it. 
00:31:09.639 [2024-11-25 13:05:49.454143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.639 [2024-11-25 13:05:49.454154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.639 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-connect error pair repeats for every retry from 13:05:49.454370 through 13:05:49.516195 (console time 00:31:09.639 to 00:31:09.917), always errno = 111 against tqpair=0xda8490, addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it." ...]
00:31:09.917 [2024-11-25 13:05:49.516183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:09.917 [2024-11-25 13:05:49.516195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420
00:31:09.917 qpair failed and we were unable to recover it.
00:31:09.917 [2024-11-25 13:05:49.516516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.516529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.917 [2024-11-25 13:05:49.516871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.516883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.917 [2024-11-25 13:05:49.517224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.517234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.917 [2024-11-25 13:05:49.517637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.517648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.917 [2024-11-25 13:05:49.517815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.517827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.917 [2024-11-25 13:05:49.518124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.518136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.917 [2024-11-25 13:05:49.518448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.518459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.917 [2024-11-25 13:05:49.518636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.518646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.917 [2024-11-25 13:05:49.519039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.917 [2024-11-25 13:05:49.519051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.917 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.519396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.519408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 
00:31:09.918 [2024-11-25 13:05:49.519706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.519716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.519889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.519900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.520191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.520202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.520504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.520515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.520829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.520840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.521243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.521257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.521620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.521632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.521857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.521874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.522181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.522193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.522508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.522519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 
00:31:09.918 [2024-11-25 13:05:49.522723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.522735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.522935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.522947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.523258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.523269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.523615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.523626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.524043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.524055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.524231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.524242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.524430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.524441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.524786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.524796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.524978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.524990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.525187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.525199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 
00:31:09.918 [2024-11-25 13:05:49.525406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda5020 is same with the state(6) to be set 00:31:09.918 [2024-11-25 13:05:49.525952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.526002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.526340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.526371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22f4000b90 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.526742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.526756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.527073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.527085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.527296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.527308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.527487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.527496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.918 [2024-11-25 13:05:49.527572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.918 [2024-11-25 13:05:49.527582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.918 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.527913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.527925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.528113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.528124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 
00:31:09.919 [2024-11-25 13:05:49.528416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.528427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.528476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.528485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.528677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.528687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.529119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.529130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.529309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.529320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.529655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.529666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.529917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.529929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.530278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.530290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.530589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.530600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.530801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.530812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 
00:31:09.919 [2024-11-25 13:05:49.531120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.531131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.531439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.531450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.531848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.531859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.532171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.532182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.532484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.532496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.532816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.532828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.533203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.533214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.533518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.533530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.533750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.533761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.534085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.534099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 
00:31:09.919 [2024-11-25 13:05:49.534411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.534422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.534647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.534657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.534840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.534852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.535104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.535115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.535509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.535519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.535645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.535656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.535990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.536001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.536329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.536341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.919 qpair failed and we were unable to recover it. 00:31:09.919 [2024-11-25 13:05:49.536689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.919 [2024-11-25 13:05:49.536700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.537050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.537062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 
00:31:09.920 [2024-11-25 13:05:49.537242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.537258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.537633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.537644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.537843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.537853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.538228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.538239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.538506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.538517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.538700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.538710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.539036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.539047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.539375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.539386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.539698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.539709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.539900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.539913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 
00:31:09.920 [2024-11-25 13:05:49.540059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.540069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.540381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.540391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.540722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.540732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.541081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.541092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.541392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.541403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.541704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.541714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.542034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.542045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.542288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.542299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.542619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.542630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.542939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.542950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 
00:31:09.920 [2024-11-25 13:05:49.543273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.543285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.543484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.543495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.543776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.543787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.544105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.544116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.544431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.544441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.544618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.544628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.545022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.545033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.920 qpair failed and we were unable to recover it. 00:31:09.920 [2024-11-25 13:05:49.545375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.920 [2024-11-25 13:05:49.545385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.545728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.545738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.546129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.546139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 
00:31:09.921 [2024-11-25 13:05:49.546307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.546318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.546676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.546686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.547006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.547017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.547242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.547252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.547590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.547600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.547945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.547955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.548236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.548245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.548566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.548576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.548882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.548893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.549083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.549093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 
00:31:09.921 [2024-11-25 13:05:49.549403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.549414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.549732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.549748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.550086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.550098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.550296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.550307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.550641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.550651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.550986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.550996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.551207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.551217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.551479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.551488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.551805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.551815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.552012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.552022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 
00:31:09.921 [2024-11-25 13:05:49.552325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.552336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.552508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.552518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.552690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.552700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.553090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.553101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.553421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.553430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.553756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.553767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.553964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.553975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.554145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.554156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.554354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.921 [2024-11-25 13:05:49.554364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.921 qpair failed and we were unable to recover it. 00:31:09.921 [2024-11-25 13:05:49.554650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.554660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 
00:31:09.922 [2024-11-25 13:05:49.554968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.554979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.555180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.555191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.555540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.555550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.555753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.555763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.555948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.555958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.556307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.556317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.556660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.556670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.556954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.556965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.557289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.557301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.557599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.557609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 
00:31:09.922 [2024-11-25 13:05:49.557915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.557925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.558160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.558171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.558568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.558579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.558883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.558894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.559195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.559204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.559423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.559433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.559750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.559760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.559973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.559983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.560293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.560303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.560671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.560681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 
00:31:09.922 [2024-11-25 13:05:49.560994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.561006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.561328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.561339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.561667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.922 [2024-11-25 13:05:49.561678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.922 qpair failed and we were unable to recover it. 00:31:09.922 [2024-11-25 13:05:49.561859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.561875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.562082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.562092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.562508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.562520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.562836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.562847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.563168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.563179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.563602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.563613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.563841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.563852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 
00:31:09.923 [2024-11-25 13:05:49.564187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.564198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.564370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.564380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.564572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.564582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.564958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.564969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.565138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.565149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.565441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.565452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.565794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.565806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.566125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.566135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.566459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.566470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.566661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.566671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 
00:31:09.923 [2024-11-25 13:05:49.566896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.566906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.567253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.567265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.567595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.567606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.567926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.567938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.568108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.568118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.568351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.568361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.568671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.568685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.568922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.568933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.569253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.569263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.569591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.569603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 
00:31:09.923 [2024-11-25 13:05:49.569970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.569981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.570291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.570301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.923 [2024-11-25 13:05:49.570556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.923 [2024-11-25 13:05:49.570567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.923 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.570859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.570873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.571064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.571073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.571392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.571401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.571587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.571598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.571886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.571896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.572094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.572106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.572514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.572524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 
00:31:09.924 [2024-11-25 13:05:49.572819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.572830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.573047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.573057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.573375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.573385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.573709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.573720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.574040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.574051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.574377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.574389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.574557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.574568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.574762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.574773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.575094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.575106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.575435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.575446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 
00:31:09.924 [2024-11-25 13:05:49.575835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.575845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.575987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.575997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.576257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.576267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.576462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.576480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.576700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.576711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.576907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.576918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.577327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.577340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.577643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.577655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.577852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.577870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.578193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.578205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 
00:31:09.924 [2024-11-25 13:05:49.578535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.578547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.578958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.578969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.579136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.579146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.924 qpair failed and we were unable to recover it. 00:31:09.924 [2024-11-25 13:05:49.579326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.924 [2024-11-25 13:05:49.579337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.579502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.579512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.579696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.579708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.580013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.580023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.580191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.580202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.580608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.580618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.580788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.580800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 
00:31:09.925 [2024-11-25 13:05:49.581102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.581113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.581316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.581326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.581624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.581637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.925 [2024-11-25 13:05:49.581859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.581877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:09.925 [2024-11-25 13:05:49.582266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.582280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:09.925 [2024-11-25 13:05:49.582611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.582624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:09.925 [2024-11-25 13:05:49.582807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.582820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.582885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.582895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 
00:31:09.925 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:09.925 [2024-11-25 13:05:49.583089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.583100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.583407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.583418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.583620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.583631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.583934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.583946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.584137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.584147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.584481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.584492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.584821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.584831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.585007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.585019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.585372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.585384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.585763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.585773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 
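Every record in this stretch reports the same condition: connect() returning errno = 111, which on Linux is ECONNREFUSED, i.e. nothing is accepting on 10.0.0.2 port 4420 while the nvmf_target_disconnect_tc2 case has the target side down. A minimal standalone sketch of how a client observes this (illustration only, not SPDK's posix_sock_create / nvme_tcp_qpair_connect_sock code):

/* Minimal illustration: connecting to a TCP port with no listener
 * fails with ECONNREFUSED, which is errno 111 on Linux. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),   /* NVMe/TCP port used throughout this log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no target listening this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

The numeric value 111 is Linux-specific; portable code should compare against ECONNREFUSED rather than the raw number.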
00:31:09.925 [2024-11-25 13:05:49.585984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.585996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.586174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.586185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.586375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.586388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.586697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.925 [2024-11-25 13:05:49.586709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.925 qpair failed and we were unable to recover it. 00:31:09.925 [2024-11-25 13:05:49.586934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.586948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.587213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.587225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.587524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.587534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.587873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.587887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.588210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.588221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.588533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.588544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 
00:31:09.926 [2024-11-25 13:05:49.588879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.588891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.589248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.589260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.589598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.589609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.589658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.589668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.589986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.589997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.590330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.590340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.590543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.590553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.590714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.590724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.591015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.591026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.591350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.591360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 
00:31:09.926 [2024-11-25 13:05:49.591709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.591720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.591962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.591975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.592187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.592199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.592370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.592381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.592799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.592810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.593123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.593134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.593425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.593435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.593740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.593750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.594083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.594094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.594291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.594309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 
00:31:09.926 [2024-11-25 13:05:49.594651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.594662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.594992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.595004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.595182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.595195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.595411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.595422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.926 [2024-11-25 13:05:49.595749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.926 [2024-11-25 13:05:49.595762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.926 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.595955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.595966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.596119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.596131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.596525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.596618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.597099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.597196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.597627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.597664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f22e8000b90 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 
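In the last records above, the failing tqpair address switches from 0xda8490 to 0x7f22e8000b90 for a few attempts (and returns to 0xda8490 below), consistent with the host discarding one qpair object and retrying with a freshly allocated one. A hedged sketch of that retry shape, with hypothetical helper names rather than SPDK's actual recovery path; a fresh socket is created per attempt because POSIX leaves the state of a TCP socket after a failed connect() unspecified:

/* Hypothetical retry loop (illustration only, not SPDK's logic). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect_once(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;                 /* connected */

    close(fd);                     /* failed attempt: discard this socket */
    return -1;
}

int main(void)
{
    for (int attempt = 0; attempt < 5; attempt++) {
        int fd = try_connect_once("10.0.0.2", 4420);
        if (fd >= 0) {
            close(fd);
            return 0;              /* target is reachable again */
        }
        fprintf(stderr, "attempt %d: errno = %d (%s)\n",
                attempt, errno, strerror(errno));
        sleep(1);                  /* back off before retrying */
    }
    return 1;
}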
00:31:09.927 [2024-11-25 13:05:49.597991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.598004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.598332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.598342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.598648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.598660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.598876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.598887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.599266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.599277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.599622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.599633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.599931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.599943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.600142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.600152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.600368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.600378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.600692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.600704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 
00:31:09.927 [2024-11-25 13:05:49.600912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.600922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.601268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.601279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.601621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.601632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.601978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.601988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.602198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.602208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.602584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.602595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.602894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.602905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.602950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.602960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.603285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.603295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 00:31:09.927 [2024-11-25 13:05:49.603456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.603468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it. 
00:31:09.927 [2024-11-25 13:05:49.603871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.927 [2024-11-25 13:05:49.603882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420 00:31:09.927 qpair failed and we were unable to recover it.
00:31:09.927 [... the same three-entry failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xda8490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with advancing timestamps through 2024-11-25 13:05:49.622365 ...]
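Note: errno = 111 is ECONNREFUSED, meaning nothing is accepting connections at 10.0.0.2:4420. In this target_disconnect test case the flood of identical failures is expected: the host-side NVMe/TCP reconnect path keeps probing a listener that is deliberately down, and each probe fails the qpair again. A minimal way to observe the same error from a shell, assuming the same unreachable address and port, is:

  # hypothetical reproduction, not part of the test scripts
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo "connect failed"

bash's /dev/tcp redirection issues the same connect(2) call and reports "Connection refused" when the port has no listener.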
00:31:09.930 [... connect() failed, errno = 111 / qpair failed sequences continue (13:05:49.622543 through 13:05:49.624754) ...]
00:31:09.930 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:09.930 [... two further connect() failure sequences (13:05:49.625147, 13:05:49.625347) ...]
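Note: the trap set above by nvmf/common.sh arms cleanup for the rest of the test case: on SIGINT, SIGTERM, or normal EXIT it runs process_shm --id $NVMF_APP_SHM_ID first (the '|| :' swallows any failure so teardown still proceeds) and then nvmftestfini to shut the NVMe-oF target down. The idiom in isolation, with hypothetical stand-in functions:

  # sketch of the same pattern; collect_stats and teardown are placeholders
  trap 'collect_stats || :; teardown' SIGINT SIGTERM EXIT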
00:31:09.930 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:09.930 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:09.930 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:09.930 [... interleaved connect() failed, errno = 111 / qpair failed sequences (13:05:49.625586 through 13:05:49.627751) elided ...]
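Note: rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so this step asks the running target for a 64 MB RAM-backed malloc bdev with a 512-byte block size, named Malloc0. The lone "Malloc0" that surfaces a few entries below, wedged between connect() failures, is that RPC printing the created bdev's name. Invoked directly it would look like this (sketch; repo-relative script path assumed):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # on success prints the new bdev name:
  # Malloc0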
00:31:09.930 [... the connect() failed, errno = 111 / qpair failed sequence for tqpair=0xda8490, addr=10.0.0.2, port=4420 repeats continuously from 13:05:49.627971 through 13:05:49.656981 ...]
00:31:09.934 [... connect() failure sequences continue (13:05:49.657269 through 13:05:49.658903) ...]
00:31:09.934 Malloc0
00:31:09.934 [... three further connect() failure sequences (13:05:49.659286 through 13:05:49.659875) ...]
00:31:09.934 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:09.934 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:31:09.934 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:09.934 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:09.934 [... 8 identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs interleaved with the xtrace lines above, 13:05:49.660129 through 13:05:49.662362 ...]
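The harness line above (host/target_disconnect.sh@21) creates the target's TCP transport over JSON-RPC before any subsystem or listener exists, which is why the initiator's connect() attempts keep failing in the meantime. A minimal standalone sketch of the same step, assuming a running nvmf_tgt and the stock SPDK scripts/rpc.py client on its default RPC socket (the rpc.py path is an assumption, not taken from this log):

  # Sketch only: create the TCP transport with the same flags the harness passes.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o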
00:31:09.934 [... 14 more identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs, 13:05:49.662551 through 13:05:49.666457, each ending "qpair failed and we were unable to recover it." ...]
00:31:09.935 [2024-11-25 13:05:49.666533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:09.935 [... 26 more identical connect()/tqpair=0xda8490 error pairs, 13:05:49.666783 through 13:05:49.674668 ...]
00:31:09.936 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:09.936 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:09.936 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:09.936 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:09.936 [... 8 identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs interleaved with the xtrace lines above, 13:05:49.674996 through 13:05:49.677151 ...]
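Next the harness creates the subsystem the initiator has been trying to reach (subnqn nqn.2016-06.io.spdk:cnode1). An equivalent standalone step under the same assumptions as the sketch above; per the stock rpc.py, -a should allow any host and -s set the serial number, though those flag semantics are inferred rather than shown in this log:

  # Sketch only: create the target subsystem the test connects to.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001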
00:31:09.936 [... 30 more identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs, 13:05:49.677393 through 13:05:49.685487, each ending "qpair failed and we were unable to recover it." ...]
00:31:09.937 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:09.937 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:09.937 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:09.937 [... 9 identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs interleaved with the xtrace lines above, 13:05:49.685826 through 13:05:49.688341 ...]
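The stray "Malloc0" printed earlier is presumably the bdev name echoed back when the backing malloc bdev was created with xtrace off; here that bdev is attached to the subsystem as a namespace. Equivalent standalone step, same assumptions as the sketches above:

  # Sketch only: expose bdev Malloc0 as a namespace of cnode1.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0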
00:31:09.937 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:09.938 [... 10 identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs, 13:05:49.688642 through 13:05:49.691115 ...]
00:31:09.938 [... 30 more identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs, 13:05:49.691392 through 13:05:49.699232, each ending "qpair failed and we were unable to recover it." ...]
00:31:09.939 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:09.939 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:09.939 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:09.939 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:09.939 [... 8 identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs interleaved with the xtrace lines above, 13:05:49.699588 through 13:05:49.701366 ...]
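This add_listener call is what finally makes the target listen on 10.0.0.2:4420 (see the *** NVMe/TCP Target Listening *** notice just below), so it marks the point where the initiator's TCP connect() can stop failing with errno 111. Equivalent standalone step, same assumptions:

  # Sketch only: start listening for cnode1 on the address/port the initiator has been retrying.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420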
00:31:09.939 [... 18 more identical connect() failed (errno = 111) / tqpair=0xda8490 error pairs, 13:05:49.701656 through 13:05:49.706608, each ending "qpair failed and we were unable to recover it." ...]
00:31:09.940 [2024-11-25 13:05:49.706797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:09.940 [2024-11-25 13:05:49.707600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.940 [2024-11-25 13:05:49.707711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.940 [2024-11-25 13:05:49.707731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.940 [2024-11-25 13:05:49.707739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.940 [2024-11-25 13:05:49.707746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.940 [2024-11-25 13:05:49.707765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.940 qpair failed and we were unable to recover it.
00:31:09.940 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:09.940 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:09.940 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:09.940 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:09.940 [2024-11-25 13:05:49.717405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.940 [2024-11-25 13:05:49.717464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.940 [2024-11-25 13:05:49.717480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.940 [2024-11-25 13:05:49.717488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.940 [2024-11-25 13:05:49.717494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.940 [2024-11-25 13:05:49.717509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.940 qpair failed and we were unable to recover it.
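Note on the status pair above: "sct 1, sc 130" is status code type 1 (command specific) with status code 130 = 0x82, which for the Fabrics CONNECT command the NVMe-oF spec defines as "Connect Invalid Parameters". That reading is consistent with the target-side "Unknown controller ID 0x1": once the listener comes back (the NOTICE line), the host's I/O-queue CONNECT still names controller ID 1, which the target no longer recognizes. A quick decode sketch, illustrative only:

    # Re-derive the hex status code from the decimal value printed in the log.
    sct=1 sc=130
    printf 'sct=%d (command specific), sc=0x%02x (Fabrics CONNECT: invalid parameters)\n' "$sct" "$sc"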
00:31:09.940 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:09.940 13:05:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 836717
00:31:09.940 [2024-11-25 13:05:49.727477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.940 [2024-11-25 13:05:49.727537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.940 [2024-11-25 13:05:49.727557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.940 [2024-11-25 13:05:49.727565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.940 [2024-11-25 13:05:49.727571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.940 [2024-11-25 13:05:49.727586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.940 qpair failed and we were unable to recover it.
00:31:09.940 [2024-11-25 13:05:49.737411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.940 [2024-11-25 13:05:49.737470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.940 [2024-11-25 13:05:49.737484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.940 [2024-11-25 13:05:49.737491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.940 [2024-11-25 13:05:49.737498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.940 [2024-11-25 13:05:49.737511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.940 qpair failed and we were unable to recover it.
00:31:09.940 [2024-11-25 13:05:49.747391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.940 [2024-11-25 13:05:49.747459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.940 [2024-11-25 13:05:49.747472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.940 [2024-11-25 13:05:49.747480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.940 [2024-11-25 13:05:49.747486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.940 [2024-11-25 13:05:49.747500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.940 qpair failed and we were unable to recover it.
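Note on the xtrace lines above: `rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420` re-adds the discovery listener, the `[[ 0 == 0 ]]` lines appear to be the harness's exit-status check with the status already expanded, and `wait 836717` blocks on a previously backgrounded job whose PID was 836717. A hypothetical sketch of that pattern; the names below are stand-ins, not the harness's actual code:

    # Background a workload, re-add the listener via RPC, verify the RPC's
    # exit status, then block until the background job finishes.
    run_reconnect_workload &   # hypothetical stand-in for the backgrounded job
    bgpid=$!
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    [[ $? == 0 ]]              # traces as '[[ 0 == 0 ]]' on success, as above
    wait "$bgpid"              # corresponds to 'wait 836717' in the log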
00:31:09.940 [2024-11-25 13:05:49.757443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.940 [2024-11-25 13:05:49.757504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.940 [2024-11-25 13:05:49.757517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.940 [2024-11-25 13:05:49.757524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.940 [2024-11-25 13:05:49.757531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.941 [2024-11-25 13:05:49.757545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.941 qpair failed and we were unable to recover it.
00:31:09.941 [2024-11-25 13:05:49.767447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.941 [2024-11-25 13:05:49.767539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.941 [2024-11-25 13:05:49.767553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.941 [2024-11-25 13:05:49.767564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.941 [2024-11-25 13:05:49.767570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.941 [2024-11-25 13:05:49.767584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.941 qpair failed and we were unable to recover it.
00:31:09.941 [2024-11-25 13:05:49.777455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.941 [2024-11-25 13:05:49.777517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.941 [2024-11-25 13:05:49.777543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.941 [2024-11-25 13:05:49.777552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.941 [2024-11-25 13:05:49.777559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.941 [2024-11-25 13:05:49.777579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.941 qpair failed and we were unable to recover it.
00:31:09.941 [2024-11-25 13:05:49.787512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.941 [2024-11-25 13:05:49.787570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.941 [2024-11-25 13:05:49.787596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.941 [2024-11-25 13:05:49.787604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.941 [2024-11-25 13:05:49.787611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.941 [2024-11-25 13:05:49.787631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.941 qpair failed and we were unable to recover it.
00:31:09.941 [2024-11-25 13:05:49.797507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.941 [2024-11-25 13:05:49.797591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.941 [2024-11-25 13:05:49.797607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.941 [2024-11-25 13:05:49.797614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.941 [2024-11-25 13:05:49.797621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.941 [2024-11-25 13:05:49.797636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.941 qpair failed and we were unable to recover it.
00:31:09.941 [2024-11-25 13:05:49.807523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:09.941 [2024-11-25 13:05:49.807578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:09.941 [2024-11-25 13:05:49.807592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:09.941 [2024-11-25 13:05:49.807599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:09.941 [2024-11-25 13:05:49.807606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:09.941 [2024-11-25 13:05:49.807620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:09.941 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.817586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.817655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.817668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.817675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.817682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.817695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.827579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.827637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.827650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.827657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.827664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.827677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.837478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.837539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.837552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.837559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.837565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.837578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.847639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.847693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.847707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.847714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.847720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.847733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.857693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.857783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.857800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.857807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.857814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.857828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.867707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.867760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.867773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.867780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.867787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.867800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.877714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.877762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.877775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.877782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.877788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.877802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.887804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.887906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.887920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.887927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.887933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.887947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.897760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.897819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.897833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.897844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.897850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.897869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.907831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.907891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.907905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.907912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.907918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.907931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.917899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.918001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.918014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.918021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.918027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.918041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.927983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.928048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.928061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.928069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.203 [2024-11-25 13:05:49.928075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.203 [2024-11-25 13:05:49.928089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.203 qpair failed and we were unable to recover it.
00:31:10.203 [2024-11-25 13:05:49.937813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.203 [2024-11-25 13:05:49.937881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.203 [2024-11-25 13:05:49.937895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.203 [2024-11-25 13:05:49.937902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.204 [2024-11-25 13:05:49.937909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.204 [2024-11-25 13:05:49.937922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.204 qpair failed and we were unable to recover it.
00:31:10.204 [2024-11-25 13:05:49.947960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.204 [2024-11-25 13:05:49.948021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.204 [2024-11-25 13:05:49.948034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.204 [2024-11-25 13:05:49.948041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.204 [2024-11-25 13:05:49.948048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.204 [2024-11-25 13:05:49.948061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.204 qpair failed and we were unable to recover it.
00:31:10.204 [2024-11-25 13:05:49.958009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.204 [2024-11-25 13:05:49.958078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.204 [2024-11-25 13:05:49.958091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.204 [2024-11-25 13:05:49.958099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.204 [2024-11-25 13:05:49.958105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.204 [2024-11-25 13:05:49.958119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.204 qpair failed and we were unable to recover it.
00:31:10.204 [2024-11-25 13:05:49.967952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.204 [2024-11-25 13:05:49.968010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.204 [2024-11-25 13:05:49.968023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.204 [2024-11-25 13:05:49.968030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.204 [2024-11-25 13:05:49.968037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.204 [2024-11-25 13:05:49.968050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.204 qpair failed and we were unable to recover it.
00:31:10.204 [2024-11-25 13:05:49.977891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.204 [2024-11-25 13:05:49.977988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.204 [2024-11-25 13:05:49.978001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.204 [2024-11-25 13:05:49.978009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.204 [2024-11-25 13:05:49.978015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.204 [2024-11-25 13:05:49.978028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.204 qpair failed and we were unable to recover it.
00:31:10.204 [2024-11-25 13:05:49.987911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:10.204 [2024-11-25 13:05:49.987968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:10.204 [2024-11-25 13:05:49.987984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:10.204 [2024-11-25 13:05:49.987991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:10.204 [2024-11-25 13:05:49.987997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:10.204 [2024-11-25 13:05:49.988011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:10.204 qpair failed and we were unable to recover it.
00:31:10.204 [2024-11-25 13:05:49.998102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.204 [2024-11-25 13:05:49.998166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.204 [2024-11-25 13:05:49.998180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.204 [2024-11-25 13:05:49.998188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.204 [2024-11-25 13:05:49.998194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.204 [2024-11-25 13:05:49.998207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.204 qpair failed and we were unable to recover it. 00:31:10.204 [2024-11-25 13:05:50.008186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.204 [2024-11-25 13:05:50.008240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.204 [2024-11-25 13:05:50.008256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.204 [2024-11-25 13:05:50.008264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.204 [2024-11-25 13:05:50.008270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.204 [2024-11-25 13:05:50.008285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.204 qpair failed and we were unable to recover it. 00:31:10.204 [2024-11-25 13:05:50.018122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.204 [2024-11-25 13:05:50.018179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.204 [2024-11-25 13:05:50.018192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.204 [2024-11-25 13:05:50.018200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.204 [2024-11-25 13:05:50.018206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.204 [2024-11-25 13:05:50.018220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.204 qpair failed and we were unable to recover it. 
00:31:10.204 [2024-11-25 13:05:50.028239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.204 [2024-11-25 13:05:50.028295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.204 [2024-11-25 13:05:50.028308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.204 [2024-11-25 13:05:50.028319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.204 [2024-11-25 13:05:50.028325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.204 [2024-11-25 13:05:50.028339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.204 qpair failed and we were unable to recover it. 00:31:10.204 [2024-11-25 13:05:50.038183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.204 [2024-11-25 13:05:50.038234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.204 [2024-11-25 13:05:50.038247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.204 [2024-11-25 13:05:50.038254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.204 [2024-11-25 13:05:50.038261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.204 [2024-11-25 13:05:50.038274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.204 qpair failed and we were unable to recover it. 00:31:10.204 [2024-11-25 13:05:50.048162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.204 [2024-11-25 13:05:50.048211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.204 [2024-11-25 13:05:50.048225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.204 [2024-11-25 13:05:50.048232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.204 [2024-11-25 13:05:50.048239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.204 [2024-11-25 13:05:50.048254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.204 qpair failed and we were unable to recover it. 
00:31:10.204 [2024-11-25 13:05:50.058189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.204 [2024-11-25 13:05:50.058244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.204 [2024-11-25 13:05:50.058257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.204 [2024-11-25 13:05:50.058264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.204 [2024-11-25 13:05:50.058270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.204 [2024-11-25 13:05:50.058283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.204 qpair failed and we were unable to recover it. 00:31:10.204 [2024-11-25 13:05:50.068263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.204 [2024-11-25 13:05:50.068357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.204 [2024-11-25 13:05:50.068370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.204 [2024-11-25 13:05:50.068377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.205 [2024-11-25 13:05:50.068384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.205 [2024-11-25 13:05:50.068398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.205 qpair failed and we were unable to recover it. 00:31:10.205 [2024-11-25 13:05:50.078249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.205 [2024-11-25 13:05:50.078324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.205 [2024-11-25 13:05:50.078338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.205 [2024-11-25 13:05:50.078345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.205 [2024-11-25 13:05:50.078351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.205 [2024-11-25 13:05:50.078365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.205 qpair failed and we were unable to recover it. 
00:31:10.205 [2024-11-25 13:05:50.088285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.205 [2024-11-25 13:05:50.088342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.205 [2024-11-25 13:05:50.088355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.205 [2024-11-25 13:05:50.088362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.205 [2024-11-25 13:05:50.088368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.205 [2024-11-25 13:05:50.088382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.205 qpair failed and we were unable to recover it. 00:31:10.205 [2024-11-25 13:05:50.098321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.205 [2024-11-25 13:05:50.098378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.205 [2024-11-25 13:05:50.098394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.205 [2024-11-25 13:05:50.098402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.205 [2024-11-25 13:05:50.098408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.205 [2024-11-25 13:05:50.098422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.205 qpair failed and we were unable to recover it. 00:31:10.467 [2024-11-25 13:05:50.108364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.108429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.108443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.108450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.108456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.108470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 
00:31:10.467 [2024-11-25 13:05:50.118405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.118460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.118477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.118484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.118490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.118503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 00:31:10.467 [2024-11-25 13:05:50.128269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.128324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.128338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.128345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.128352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.128365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 00:31:10.467 [2024-11-25 13:05:50.138410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.138464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.138477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.138484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.138491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.138505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 
00:31:10.467 [2024-11-25 13:05:50.148460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.148520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.148534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.148541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.148547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.148561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 00:31:10.467 [2024-11-25 13:05:50.158525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.158585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.158598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.158609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.158615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.158628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 00:31:10.467 [2024-11-25 13:05:50.168541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.168615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.168641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.168649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.168657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.168676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 
00:31:10.467 [2024-11-25 13:05:50.178527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.178591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.178617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.178627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.178635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.178655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 00:31:10.467 [2024-11-25 13:05:50.188537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.188597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.188613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.188621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.188627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.188642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 00:31:10.467 [2024-11-25 13:05:50.198565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.198629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.198643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.198650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.198657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.198671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 
00:31:10.467 [2024-11-25 13:05:50.208480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.208537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.208552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.208559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.208566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.208580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.467 qpair failed and we were unable to recover it. 00:31:10.467 [2024-11-25 13:05:50.218684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.467 [2024-11-25 13:05:50.218741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.467 [2024-11-25 13:05:50.218755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.467 [2024-11-25 13:05:50.218762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.467 [2024-11-25 13:05:50.218768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.467 [2024-11-25 13:05:50.218781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.468 qpair failed and we were unable to recover it. 00:31:10.468 [2024-11-25 13:05:50.228679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.468 [2024-11-25 13:05:50.228739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.468 [2024-11-25 13:05:50.228764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.468 [2024-11-25 13:05:50.228773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.468 [2024-11-25 13:05:50.228780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.468 [2024-11-25 13:05:50.228800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.468 qpair failed and we were unable to recover it. 
00:31:10.999 [2024-11-25 13:05:50.870330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.999 [2024-11-25 13:05:50.870391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.999 [2024-11-25 13:05:50.870404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.999 [2024-11-25 13:05:50.870411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.999 [2024-11-25 13:05:50.870417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.999 [2024-11-25 13:05:50.870430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.999 qpair failed and we were unable to recover it. 00:31:10.999 [2024-11-25 13:05:50.880488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.999 [2024-11-25 13:05:50.880538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.999 [2024-11-25 13:05:50.880551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.999 [2024-11-25 13:05:50.880558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.999 [2024-11-25 13:05:50.880564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.999 [2024-11-25 13:05:50.880578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.999 qpair failed and we were unable to recover it. 00:31:10.999 [2024-11-25 13:05:50.890509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:10.999 [2024-11-25 13:05:50.890556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:10.999 [2024-11-25 13:05:50.890569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:10.999 [2024-11-25 13:05:50.890576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:10.999 [2024-11-25 13:05:50.890583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:10.999 [2024-11-25 13:05:50.890596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:10.999 qpair failed and we were unable to recover it. 
00:31:11.261 [2024-11-25 13:05:50.900436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.261 [2024-11-25 13:05:50.900488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.261 [2024-11-25 13:05:50.900504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.261 [2024-11-25 13:05:50.900511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.261 [2024-11-25 13:05:50.900518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.261 [2024-11-25 13:05:50.900531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.261 qpair failed and we were unable to recover it. 00:31:11.261 [2024-11-25 13:05:50.910446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.261 [2024-11-25 13:05:50.910504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.261 [2024-11-25 13:05:50.910517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.261 [2024-11-25 13:05:50.910524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.261 [2024-11-25 13:05:50.910530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.261 [2024-11-25 13:05:50.910543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.261 qpair failed and we were unable to recover it. 00:31:11.261 [2024-11-25 13:05:50.920591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.261 [2024-11-25 13:05:50.920650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.261 [2024-11-25 13:05:50.920663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.261 [2024-11-25 13:05:50.920669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.261 [2024-11-25 13:05:50.920676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.261 [2024-11-25 13:05:50.920689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.261 qpair failed and we were unable to recover it. 
00:31:11.261 [2024-11-25 13:05:50.930637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.261 [2024-11-25 13:05:50.930690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.261 [2024-11-25 13:05:50.930703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.261 [2024-11-25 13:05:50.930710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.261 [2024-11-25 13:05:50.930717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.261 [2024-11-25 13:05:50.930730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.261 qpair failed and we were unable to recover it. 00:31:11.261 [2024-11-25 13:05:50.940609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.261 [2024-11-25 13:05:50.940668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.261 [2024-11-25 13:05:50.940681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.261 [2024-11-25 13:05:50.940688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.261 [2024-11-25 13:05:50.940698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.261 [2024-11-25 13:05:50.940711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.261 qpair failed and we were unable to recover it. 00:31:11.261 [2024-11-25 13:05:50.950689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.261 [2024-11-25 13:05:50.950775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.261 [2024-11-25 13:05:50.950788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.261 [2024-11-25 13:05:50.950796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.261 [2024-11-25 13:05:50.950802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.261 [2024-11-25 13:05:50.950815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.261 qpair failed and we were unable to recover it. 
00:31:11.261 [2024-11-25 13:05:50.960698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:50.960752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:50.960765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:50.960772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:50.960778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:50.960792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:50.970732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:50.970788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:50.970801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:50.970808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:50.970814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:50.970827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:50.980773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:50.980829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:50.980842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:50.980849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:50.980856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:50.980874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 
00:31:11.262 [2024-11-25 13:05:50.990791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:50.990850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:50.990876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:50.990883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:50.990890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:50.990903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:51.000835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.000921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.000936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.000943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.000949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.000962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:51.010848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.010904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.010918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.010925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.010932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.010945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 
00:31:11.262 [2024-11-25 13:05:51.020902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.020957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.020970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.020977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.020983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.020997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:51.030900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.030960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.030976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.030983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.030989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.031003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:51.040935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.040990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.041004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.041010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.041017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.041031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 
00:31:11.262 [2024-11-25 13:05:51.050937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.050988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.051001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.051008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.051014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.051028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:51.060999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.061067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.061081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.061088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.061094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.061108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:51.071011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.071069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.071084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.071091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.071101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.071119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 
00:31:11.262 [2024-11-25 13:05:51.081002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.262 [2024-11-25 13:05:51.081055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.262 [2024-11-25 13:05:51.081069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.262 [2024-11-25 13:05:51.081076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.262 [2024-11-25 13:05:51.081082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.262 [2024-11-25 13:05:51.081096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.262 qpair failed and we were unable to recover it. 00:31:11.262 [2024-11-25 13:05:51.091056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.263 [2024-11-25 13:05:51.091142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.263 [2024-11-25 13:05:51.091155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.263 [2024-11-25 13:05:51.091162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.263 [2024-11-25 13:05:51.091168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.263 [2024-11-25 13:05:51.091182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.263 qpair failed and we were unable to recover it. 00:31:11.263 [2024-11-25 13:05:51.101093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.263 [2024-11-25 13:05:51.101150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.263 [2024-11-25 13:05:51.101166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.263 [2024-11-25 13:05:51.101173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.263 [2024-11-25 13:05:51.101180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.263 [2024-11-25 13:05:51.101194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.263 qpair failed and we were unable to recover it. 
00:31:11.263 [2024-11-25 13:05:51.111126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.263 [2024-11-25 13:05:51.111185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.263 [2024-11-25 13:05:51.111198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.263 [2024-11-25 13:05:51.111205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.263 [2024-11-25 13:05:51.111212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.263 [2024-11-25 13:05:51.111225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.263 qpair failed and we were unable to recover it. 00:31:11.263 [2024-11-25 13:05:51.121114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.263 [2024-11-25 13:05:51.121172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.263 [2024-11-25 13:05:51.121185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.263 [2024-11-25 13:05:51.121192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.263 [2024-11-25 13:05:51.121199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.263 [2024-11-25 13:05:51.121212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.263 qpair failed and we were unable to recover it. 00:31:11.263 [2024-11-25 13:05:51.131175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.263 [2024-11-25 13:05:51.131229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.263 [2024-11-25 13:05:51.131242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.263 [2024-11-25 13:05:51.131249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.263 [2024-11-25 13:05:51.131256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.263 [2024-11-25 13:05:51.131269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.263 qpair failed and we were unable to recover it. 
00:31:11.263 [2024-11-25 13:05:51.141212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.263 [2024-11-25 13:05:51.141266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.263 [2024-11-25 13:05:51.141280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.263 [2024-11-25 13:05:51.141287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.263 [2024-11-25 13:05:51.141293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.263 [2024-11-25 13:05:51.141307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.263 qpair failed and we were unable to recover it. 00:31:11.263 [2024-11-25 13:05:51.151115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.263 [2024-11-25 13:05:51.151174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.263 [2024-11-25 13:05:51.151188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.263 [2024-11-25 13:05:51.151195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.263 [2024-11-25 13:05:51.151201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.263 [2024-11-25 13:05:51.151214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.263 qpair failed and we were unable to recover it. 00:31:11.263 [2024-11-25 13:05:51.161260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.263 [2024-11-25 13:05:51.161310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.263 [2024-11-25 13:05:51.161326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.263 [2024-11-25 13:05:51.161333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.263 [2024-11-25 13:05:51.161340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.263 [2024-11-25 13:05:51.161353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.263 qpair failed and we were unable to recover it. 
00:31:11.525 [2024-11-25 13:05:51.171278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.525 [2024-11-25 13:05:51.171329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.525 [2024-11-25 13:05:51.171342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.525 [2024-11-25 13:05:51.171349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.525 [2024-11-25 13:05:51.171356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.525 [2024-11-25 13:05:51.171369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.525 qpair failed and we were unable to recover it. 00:31:11.525 [2024-11-25 13:05:51.181218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.525 [2024-11-25 13:05:51.181310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.525 [2024-11-25 13:05:51.181323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.525 [2024-11-25 13:05:51.181330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.525 [2024-11-25 13:05:51.181338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.525 [2024-11-25 13:05:51.181351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.525 qpair failed and we were unable to recover it. 00:31:11.525 [2024-11-25 13:05:51.191347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.525 [2024-11-25 13:05:51.191401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.525 [2024-11-25 13:05:51.191415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.525 [2024-11-25 13:05:51.191422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.525 [2024-11-25 13:05:51.191428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.525 [2024-11-25 13:05:51.191442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.525 qpair failed and we were unable to recover it. 
00:31:11.525 [2024-11-25 13:05:51.201354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.525 [2024-11-25 13:05:51.201406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.201419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.201426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.201436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.201449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.211299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.211356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.211371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.211378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.211385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.211399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.221440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.221495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.221508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.221515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.221522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.221536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 
00:31:11.526 [2024-11-25 13:05:51.231469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.231528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.231542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.231549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.231555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.231569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.241472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.241528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.241541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.241548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.241554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.241568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.251500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.251558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.251583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.251592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.251599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.251618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 
00:31:11.526 [2024-11-25 13:05:51.261532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.261590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.261616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.261625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.261631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.261651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.271559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.271617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.271633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.271640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.271647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.271661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.281591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.281680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.281705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.281714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.281721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.281740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 
00:31:11.526 [2024-11-25 13:05:51.291608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.291660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.291680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.291688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.291694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.291710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.301658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.301713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.301727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.301735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.301741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.301755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.311682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.311734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.311748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.311755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.311763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.311777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 
00:31:11.526 [2024-11-25 13:05:51.321579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.321634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.526 [2024-11-25 13:05:51.321649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.526 [2024-11-25 13:05:51.321656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.526 [2024-11-25 13:05:51.321663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.526 [2024-11-25 13:05:51.321677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.526 qpair failed and we were unable to recover it. 00:31:11.526 [2024-11-25 13:05:51.331771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.526 [2024-11-25 13:05:51.331851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.331869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.331877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.331887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.331902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 00:31:11.527 [2024-11-25 13:05:51.341758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.341813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.341827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.341834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.341840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.341854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 
00:31:11.527 [2024-11-25 13:05:51.351801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.351866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.351880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.351888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.351894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.351908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 00:31:11.527 [2024-11-25 13:05:51.361820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.361878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.361892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.361899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.361905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.361918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 00:31:11.527 [2024-11-25 13:05:51.371724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.371777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.371790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.371797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.371803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.371816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 
00:31:11.527 [2024-11-25 13:05:51.381828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.381936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.381949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.381956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.381963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.381976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 00:31:11.527 [2024-11-25 13:05:51.391903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.391965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.391978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.391986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.391993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.392007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 00:31:11.527 [2024-11-25 13:05:51.401925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.401983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.401996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.402004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.402010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.402023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 
00:31:11.527 [2024-11-25 13:05:51.411988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.412061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.412075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.412082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.412092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.412106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 00:31:11.527 [2024-11-25 13:05:51.422006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.527 [2024-11-25 13:05:51.422109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.527 [2024-11-25 13:05:51.422126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.527 [2024-11-25 13:05:51.422134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.527 [2024-11-25 13:05:51.422140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.527 [2024-11-25 13:05:51.422153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.527 qpair failed and we were unable to recover it. 00:31:11.790 [2024-11-25 13:05:51.432016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.790 [2024-11-25 13:05:51.432075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.790 [2024-11-25 13:05:51.432089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.790 [2024-11-25 13:05:51.432097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.790 [2024-11-25 13:05:51.432103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.790 [2024-11-25 13:05:51.432116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.790 qpair failed and we were unable to recover it. 
00:31:11.790 [2024-11-25 13:05:51.442042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.790 [2024-11-25 13:05:51.442094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.790 [2024-11-25 13:05:51.442108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.790 [2024-11-25 13:05:51.442115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.790 [2024-11-25 13:05:51.442121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.790 [2024-11-25 13:05:51.442134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-11-25 13:05:51.452090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.790 [2024-11-25 13:05:51.452150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.790 [2024-11-25 13:05:51.452164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.790 [2024-11-25 13:05:51.452171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.790 [2024-11-25 13:05:51.452177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.790 [2024-11-25 13:05:51.452191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-11-25 13:05:51.462112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.790 [2024-11-25 13:05:51.462184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.790 [2024-11-25 13:05:51.462197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.790 [2024-11-25 13:05:51.462204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.790 [2024-11-25 13:05:51.462214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.790 [2024-11-25 13:05:51.462228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.790 qpair failed and we were unable to recover it. 
00:31:11.790 [2024-11-25 13:05:51.472137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.790 [2024-11-25 13:05:51.472199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.790 [2024-11-25 13:05:51.472212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.790 [2024-11-25 13:05:51.472219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.790 [2024-11-25 13:05:51.472226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.790 [2024-11-25 13:05:51.472240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-11-25 13:05:51.482159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.790 [2024-11-25 13:05:51.482209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.790 [2024-11-25 13:05:51.482224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.790 [2024-11-25 13:05:51.482232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.790 [2024-11-25 13:05:51.482238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.790 [2024-11-25 13:05:51.482251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-11-25 13:05:51.492158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.790 [2024-11-25 13:05:51.492224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.790 [2024-11-25 13:05:51.492237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.790 [2024-11-25 13:05:51.492244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.790 [2024-11-25 13:05:51.492251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.790 [2024-11-25 13:05:51.492266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.790 qpair failed and we were unable to recover it. 
00:31:11.790 [2024-11-25 13:05:51.502186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.790 [2024-11-25 13:05:51.502244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.790 [2024-11-25 13:05:51.502257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.502264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.502271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.502284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.512121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.512181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.512196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.512203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.512209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.512223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.522260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.522317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.522331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.522338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.522345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.522359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 
00:31:11.791 [2024-11-25 13:05:51.532282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.532339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.532352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.532359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.532365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.532379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.542309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.542363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.542375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.542382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.542389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.542402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.552347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.552398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.552414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.552421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.552427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.552441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 
00:31:11.791 [2024-11-25 13:05:51.562411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.562483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.562497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.562504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.562511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.562524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.572409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.572460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.572473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.572480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.572486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.572499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.582430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.582484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.582497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.582504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.582511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.582524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 
00:31:11.791 [2024-11-25 13:05:51.592482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.592577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.592591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.592598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.592607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.592622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.602481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.602533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.602547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.602554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.602560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.602573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.612522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.612579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.612592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.612599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.612605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.612618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 
00:31:11.791 [2024-11-25 13:05:51.622538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.791 [2024-11-25 13:05:51.622595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.791 [2024-11-25 13:05:51.622608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.791 [2024-11-25 13:05:51.622615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.791 [2024-11-25 13:05:51.622621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.791 [2024-11-25 13:05:51.622635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-11-25 13:05:51.632581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.792 [2024-11-25 13:05:51.632643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.792 [2024-11-25 13:05:51.632656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.792 [2024-11-25 13:05:51.632664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.792 [2024-11-25 13:05:51.632670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.792 [2024-11-25 13:05:51.632683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-11-25 13:05:51.642470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.792 [2024-11-25 13:05:51.642527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.792 [2024-11-25 13:05:51.642541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.792 [2024-11-25 13:05:51.642548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.792 [2024-11-25 13:05:51.642554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.792 [2024-11-25 13:05:51.642567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.792 qpair failed and we were unable to recover it. 
00:31:11.792 [2024-11-25 13:05:51.652632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.792 [2024-11-25 13:05:51.652685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.792 [2024-11-25 13:05:51.652699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.792 [2024-11-25 13:05:51.652706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.792 [2024-11-25 13:05:51.652713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.792 [2024-11-25 13:05:51.652726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-11-25 13:05:51.662620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.792 [2024-11-25 13:05:51.662678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.792 [2024-11-25 13:05:51.662703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.792 [2024-11-25 13:05:51.662712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.792 [2024-11-25 13:05:51.662719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.792 [2024-11-25 13:05:51.662738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-11-25 13:05:51.672682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.792 [2024-11-25 13:05:51.672789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.792 [2024-11-25 13:05:51.672804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.792 [2024-11-25 13:05:51.672812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.792 [2024-11-25 13:05:51.672818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.792 [2024-11-25 13:05:51.672833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.792 qpair failed and we were unable to recover it. 
00:31:11.792 [2024-11-25 13:05:51.682727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:11.792 [2024-11-25 13:05:51.682815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:11.792 [2024-11-25 13:05:51.682834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:11.792 [2024-11-25 13:05:51.682841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:11.792 [2024-11-25 13:05:51.682848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:11.792 [2024-11-25 13:05:51.682866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:11.792 qpair failed and we were unable to recover it. 00:31:12.055 [2024-11-25 13:05:51.692716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.692779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.692795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.692802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.692809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.692826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 00:31:12.055 [2024-11-25 13:05:51.702731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.702791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.702805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.702812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.702818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.702832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 
00:31:12.055 [2024-11-25 13:05:51.712770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.712827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.712842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.712850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.712856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.712875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 00:31:12.055 [2024-11-25 13:05:51.722788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.722839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.722852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.722859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.722873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.722887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 00:31:12.055 [2024-11-25 13:05:51.732848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.732903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.732916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.732924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.732930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.732944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 
00:31:12.055 [2024-11-25 13:05:51.742887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.742952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.742965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.742973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.742979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.742993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 00:31:12.055 [2024-11-25 13:05:51.752914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.752966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.752979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.752986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.752993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.753006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 00:31:12.055 [2024-11-25 13:05:51.762882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.762953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.762967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.762974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.762980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.762994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 
00:31:12.055 [2024-11-25 13:05:51.772938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.772997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.773011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.055 [2024-11-25 13:05:51.773019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.055 [2024-11-25 13:05:51.773025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.055 [2024-11-25 13:05:51.773039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.055 qpair failed and we were unable to recover it. 00:31:12.055 [2024-11-25 13:05:51.782978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.055 [2024-11-25 13:05:51.783034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.055 [2024-11-25 13:05:51.783047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.783054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.783061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.783074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 00:31:12.056 [2024-11-25 13:05:51.792982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.793071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.793084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.793092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.793098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.793112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 
00:31:12.056 [2024-11-25 13:05:51.802902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.802962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.802975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.802982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.802988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.803003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 00:31:12.056 [2024-11-25 13:05:51.813036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.813088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.813105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.813112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.813119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.813132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 00:31:12.056 [2024-11-25 13:05:51.823080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.823135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.823148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.823156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.823162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.823175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 
00:31:12.056 [2024-11-25 13:05:51.833097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.833158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.833171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.833178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.833184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.833198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 00:31:12.056 [2024-11-25 13:05:51.843018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.843074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.843087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.843095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.843102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.843115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 00:31:12.056 [2024-11-25 13:05:51.853144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.853196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.853209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.853217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.853227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.853240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 
00:31:12.056 [2024-11-25 13:05:51.863205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.863268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.863283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.863290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.863300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.863314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 00:31:12.056 [2024-11-25 13:05:51.873230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.873287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.873301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.873308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.873315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.873328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.056 qpair failed and we were unable to recover it. 00:31:12.056 [2024-11-25 13:05:51.883281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.056 [2024-11-25 13:05:51.883339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.056 [2024-11-25 13:05:51.883353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.056 [2024-11-25 13:05:51.883360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.056 [2024-11-25 13:05:51.883367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.056 [2024-11-25 13:05:51.883380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.057 qpair failed and we were unable to recover it. 
00:31:12.057 [2024-11-25 13:05:51.893274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.057 [2024-11-25 13:05:51.893332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.057 [2024-11-25 13:05:51.893345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.057 [2024-11-25 13:05:51.893353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.057 [2024-11-25 13:05:51.893359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.057 [2024-11-25 13:05:51.893373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.057 qpair failed and we were unable to recover it. 00:31:12.057 [2024-11-25 13:05:51.903312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.057 [2024-11-25 13:05:51.903386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.057 [2024-11-25 13:05:51.903399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.057 [2024-11-25 13:05:51.903407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.057 [2024-11-25 13:05:51.903413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.057 [2024-11-25 13:05:51.903429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.057 qpair failed and we were unable to recover it. 00:31:12.057 [2024-11-25 13:05:51.913344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.057 [2024-11-25 13:05:51.913400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.057 [2024-11-25 13:05:51.913414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.057 [2024-11-25 13:05:51.913421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.057 [2024-11-25 13:05:51.913427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.057 [2024-11-25 13:05:51.913441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.057 qpair failed and we were unable to recover it. 
00:31:12.057 [2024-11-25 13:05:51.923362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.057 [2024-11-25 13:05:51.923411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.057 [2024-11-25 13:05:51.923424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.057 [2024-11-25 13:05:51.923432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.057 [2024-11-25 13:05:51.923438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.057 [2024-11-25 13:05:51.923452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.057 qpair failed and we were unable to recover it. 00:31:12.057 [2024-11-25 13:05:51.933356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.057 [2024-11-25 13:05:51.933417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.057 [2024-11-25 13:05:51.933431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.057 [2024-11-25 13:05:51.933439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.057 [2024-11-25 13:05:51.933446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.057 [2024-11-25 13:05:51.933461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.057 qpair failed and we were unable to recover it. 00:31:12.057 [2024-11-25 13:05:51.943341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.057 [2024-11-25 13:05:51.943406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.057 [2024-11-25 13:05:51.943422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.057 [2024-11-25 13:05:51.943430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.057 [2024-11-25 13:05:51.943437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.057 [2024-11-25 13:05:51.943450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.057 qpair failed and we were unable to recover it. 
00:31:12.057 [2024-11-25 13:05:51.953330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.057 [2024-11-25 13:05:51.953395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.057 [2024-11-25 13:05:51.953408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.057 [2024-11-25 13:05:51.953415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.057 [2024-11-25 13:05:51.953422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.057 [2024-11-25 13:05:51.953435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.057 qpair failed and we were unable to recover it. 00:31:12.320 [2024-11-25 13:05:51.963501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.320 [2024-11-25 13:05:51.963557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.320 [2024-11-25 13:05:51.963570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.320 [2024-11-25 13:05:51.963577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.320 [2024-11-25 13:05:51.963584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.320 [2024-11-25 13:05:51.963597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.320 qpair failed and we were unable to recover it. 00:31:12.320 [2024-11-25 13:05:51.973365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.320 [2024-11-25 13:05:51.973416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.320 [2024-11-25 13:05:51.973430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.320 [2024-11-25 13:05:51.973438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.320 [2024-11-25 13:05:51.973444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.320 [2024-11-25 13:05:51.973458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.320 qpair failed and we were unable to recover it. 
00:31:12.320 [2024-11-25 13:05:51.983405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.320 [2024-11-25 13:05:51.983536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.320 [2024-11-25 13:05:51.983550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.320 [2024-11-25 13:05:51.983557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.320 [2024-11-25 13:05:51.983567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.320 [2024-11-25 13:05:51.983581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.320 qpair failed and we were unable to recover it.
00:31:12.320 [2024-11-25 13:05:51.993560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.320 [2024-11-25 13:05:51.993651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.320 [2024-11-25 13:05:51.993665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.320 [2024-11-25 13:05:51.993673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.320 [2024-11-25 13:05:51.993679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.320 [2024-11-25 13:05:51.993693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.320 qpair failed and we were unable to recover it.
00:31:12.320 [2024-11-25 13:05:52.003560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.320 [2024-11-25 13:05:52.003614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.320 [2024-11-25 13:05:52.003627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.320 [2024-11-25 13:05:52.003635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.320 [2024-11-25 13:05:52.003643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.320 [2024-11-25 13:05:52.003656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.320 qpair failed and we were unable to recover it.
00:31:12.320 [2024-11-25 13:05:52.013593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.320 [2024-11-25 13:05:52.013652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.320 [2024-11-25 13:05:52.013666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.320 [2024-11-25 13:05:52.013674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.320 [2024-11-25 13:05:52.013681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.320 [2024-11-25 13:05:52.013695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.320 qpair failed and we were unable to recover it.
00:31:12.320 [2024-11-25 13:05:52.023642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.320 [2024-11-25 13:05:52.023695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.023708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.023715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.023722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.023735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.033552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.033605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.033619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.033626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.033633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.033646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.043702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.043757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.043769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.043777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.043783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.043797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.053709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.053784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.053797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.053804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.053811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.053825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.063753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.063809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.063822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.063829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.063835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.063849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.073711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.073793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.073810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.073817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.073824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.073838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.083799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.083894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.083909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.083917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.083923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.083937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.093800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.093876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.093889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.093896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.093903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.093917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.103853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.103914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.103932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.103940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.103946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.103961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.113887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.321 [2024-11-25 13:05:52.114075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.321 [2024-11-25 13:05:52.114089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.321 [2024-11-25 13:05:52.114096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.321 [2024-11-25 13:05:52.114106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.321 [2024-11-25 13:05:52.114120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.321 qpair failed and we were unable to recover it.
00:31:12.321 [2024-11-25 13:05:52.123887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.123955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.123968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.123976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.123982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.123996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.133928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.133979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.133992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.134000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.134006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.134020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.143949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.144048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.144062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.144069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.144076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.144090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.154007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.154066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.154081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.154088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.154095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.154114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.164008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.164099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.164113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.164121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.164127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.164141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.174064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.174119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.174133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.174140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.174147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.174161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.184050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.184131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.184145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.184153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.184161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.184176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.194158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.194217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.194230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.194237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.194244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.194259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.204160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.204226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.204243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.204250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.204257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.204271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.322 [2024-11-25 13:05:52.214044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.322 [2024-11-25 13:05:52.214111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.322 [2024-11-25 13:05:52.214124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.322 [2024-11-25 13:05:52.214131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.322 [2024-11-25 13:05:52.214138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.322 [2024-11-25 13:05:52.214151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.322 qpair failed and we were unable to recover it.
00:31:12.585 [2024-11-25 13:05:52.224224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.585 [2024-11-25 13:05:52.224289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.585 [2024-11-25 13:05:52.224302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.585 [2024-11-25 13:05:52.224309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.585 [2024-11-25 13:05:52.224316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.585 [2024-11-25 13:05:52.224329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.585 qpair failed and we were unable to recover it.
00:31:12.585 [2024-11-25 13:05:52.234108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.585 [2024-11-25 13:05:52.234173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.585 [2024-11-25 13:05:52.234186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.585 [2024-11-25 13:05:52.234193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.585 [2024-11-25 13:05:52.234200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.585 [2024-11-25 13:05:52.234213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.585 qpair failed and we were unable to recover it.
00:31:12.585 [2024-11-25 13:05:52.244240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.585 [2024-11-25 13:05:52.244292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.585 [2024-11-25 13:05:52.244305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.585 [2024-11-25 13:05:52.244312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.585 [2024-11-25 13:05:52.244323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.585 [2024-11-25 13:05:52.244337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.585 qpair failed and we were unable to recover it.
00:31:12.585 [2024-11-25 13:05:52.254276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.585 [2024-11-25 13:05:52.254328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.585 [2024-11-25 13:05:52.254341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.585 [2024-11-25 13:05:52.254348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.585 [2024-11-25 13:05:52.254355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.585 [2024-11-25 13:05:52.254368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.585 qpair failed and we were unable to recover it.
00:31:12.585 [2024-11-25 13:05:52.264300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.585 [2024-11-25 13:05:52.264394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.585 [2024-11-25 13:05:52.264408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.585 [2024-11-25 13:05:52.264415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.585 [2024-11-25 13:05:52.264422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.585 [2024-11-25 13:05:52.264436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.585 qpair failed and we were unable to recover it.
00:31:12.585 [2024-11-25 13:05:52.274346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.585 [2024-11-25 13:05:52.274404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.585 [2024-11-25 13:05:52.274417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.585 [2024-11-25 13:05:52.274424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.585 [2024-11-25 13:05:52.274431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.274444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.284380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.284433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.284447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.284454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.284461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.284474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.294395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.294445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.294458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.294465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.294472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.294486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.304452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.304503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.304516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.304524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.304530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.304543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.314461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.314514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.314527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.314535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.314541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.314555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.324480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.324535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.324548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.324555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.324562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.324575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.334558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.334613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.334630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.334637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.334644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.334658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.344556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.344649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.344674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.344683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.344691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.344710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.354577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.354637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.354662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.354671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.354679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.354699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.364488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.364550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.364575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.364584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.364592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.364612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.374627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.586 [2024-11-25 13:05:52.374688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.586 [2024-11-25 13:05:52.374713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.586 [2024-11-25 13:05:52.374722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.586 [2024-11-25 13:05:52.374734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.586 [2024-11-25 13:05:52.374754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.586 qpair failed and we were unable to recover it.
00:31:12.586 [2024-11-25 13:05:52.384643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.384700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.384716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.384724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.384730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.384745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.394703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.394763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.394777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.394784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.394791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.394806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.404659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.404704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.404718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.404725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.404732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.404746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.414700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.414807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.414821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.414829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.414836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.414849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.424771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.424829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.424842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.424849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.424856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.424875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.434796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.434851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.434869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.434876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.434883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.434897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.444787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.444832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.444845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.444853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.444860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.444878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.454853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.454907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.454920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.454927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.454934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.454948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.464878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.464936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.464953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.464960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.464967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.464981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.474801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.587 [2024-11-25 13:05:52.474858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.587 [2024-11-25 13:05:52.474877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.587 [2024-11-25 13:05:52.474885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.587 [2024-11-25 13:05:52.474891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.587 [2024-11-25 13:05:52.474906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.587 qpair failed and we were unable to recover it.
00:31:12.587 [2024-11-25 13:05:52.484768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.588 [2024-11-25 13:05:52.484813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.588 [2024-11-25 13:05:52.484827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.588 [2024-11-25 13:05:52.484835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.588 [2024-11-25 13:05:52.484842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.588 [2024-11-25 13:05:52.484856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.588 qpair failed and we were unable to recover it.
00:31:12.851 [2024-11-25 13:05:52.494955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.851 [2024-11-25 13:05:52.495006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.851 [2024-11-25 13:05:52.495020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.851 [2024-11-25 13:05:52.495028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.851 [2024-11-25 13:05:52.495034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.851 [2024-11-25 13:05:52.495049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.851 qpair failed and we were unable to recover it.
00:31:12.851 [2024-11-25 13:05:52.505022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.851 [2024-11-25 13:05:52.505078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.851 [2024-11-25 13:05:52.505092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.851 [2024-11-25 13:05:52.505099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.851 [2024-11-25 13:05:52.505109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.851 [2024-11-25 13:05:52.505123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.851 qpair failed and we were unable to recover it.
00:31:12.851 [2024-11-25 13:05:52.515027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.851 [2024-11-25 13:05:52.515080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.851 [2024-11-25 13:05:52.515093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.851 [2024-11-25 13:05:52.515100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.851 [2024-11-25 13:05:52.515107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.851 [2024-11-25 13:05:52.515121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.851 qpair failed and we were unable to recover it.
00:31:12.851 [2024-11-25 13:05:52.524970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.851 [2024-11-25 13:05:52.525021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.851 [2024-11-25 13:05:52.525035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.851 [2024-11-25 13:05:52.525042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.851 [2024-11-25 13:05:52.525048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.851 [2024-11-25 13:05:52.525062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.851 qpair failed and we were unable to recover it.
00:31:12.851 [2024-11-25 13:05:52.535102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.851 [2024-11-25 13:05:52.535160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.851 [2024-11-25 13:05:52.535174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.851 [2024-11-25 13:05:52.535181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.851 [2024-11-25 13:05:52.535188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.851 [2024-11-25 13:05:52.535201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.851 qpair failed and we were unable to recover it.
00:31:12.851 [2024-11-25 13:05:52.545118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:12.851 [2024-11-25 13:05:52.545174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:12.851 [2024-11-25 13:05:52.545187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:12.851 [2024-11-25 13:05:52.545194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:12.851 [2024-11-25 13:05:52.545201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:12.851 [2024-11-25 13:05:52.545214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:12.851 qpair failed and we were unable to recover it.
00:31:12.851 [2024-11-25 13:05:52.555055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.851 [2024-11-25 13:05:52.555107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.851 [2024-11-25 13:05:52.555121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.851 [2024-11-25 13:05:52.555128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.851 [2024-11-25 13:05:52.555134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.851 [2024-11-25 13:05:52.555148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.851 qpair failed and we were unable to recover it. 00:31:12.851 [2024-11-25 13:05:52.564992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.851 [2024-11-25 13:05:52.565044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.851 [2024-11-25 13:05:52.565057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.851 [2024-11-25 13:05:52.565064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.851 [2024-11-25 13:05:52.565071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.851 [2024-11-25 13:05:52.565084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.851 qpair failed and we were unable to recover it. 00:31:12.851 [2024-11-25 13:05:52.575140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.851 [2024-11-25 13:05:52.575189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.851 [2024-11-25 13:05:52.575202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.851 [2024-11-25 13:05:52.575209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.851 [2024-11-25 13:05:52.575216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.851 [2024-11-25 13:05:52.575229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.851 qpair failed and we were unable to recover it. 
00:31:12.851 [2024-11-25 13:05:52.585219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.851 [2024-11-25 13:05:52.585273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.851 [2024-11-25 13:05:52.585287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.851 [2024-11-25 13:05:52.585294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.851 [2024-11-25 13:05:52.585301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.851 [2024-11-25 13:05:52.585315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.851 qpair failed and we were unable to recover it. 00:31:12.851 [2024-11-25 13:05:52.595137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.595190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.595208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.595216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.595223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.595238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.605226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.605273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.605287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.605295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.605301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.605315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 
00:31:12.852 [2024-11-25 13:05:52.615275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.615359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.615372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.615381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.615388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.615402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.625283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.625338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.625352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.625359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.625366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.625380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.635346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.635457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.635471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.635479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.635489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.635503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 
00:31:12.852 [2024-11-25 13:05:52.645327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.645378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.645391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.645398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.645405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.645418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.655373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.655422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.655435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.655442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.655449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.655463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.665463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.665541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.665554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.665561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.665569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.665582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 
00:31:12.852 [2024-11-25 13:05:52.675466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.675571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.675586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.675594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.675601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.675615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.685441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.685492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.685505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.685512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.685519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.685533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.695471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.695527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.695541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.695548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.695555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.695569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 
00:31:12.852 [2024-11-25 13:05:52.705532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.705596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.705621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.705630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.705638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.705658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.715442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.715498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.715513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.715521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.715528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.715543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.725547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.725602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.725622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.725631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.725640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.725655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 
00:31:12.852 [2024-11-25 13:05:52.735581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.735629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.735646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.735653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.735660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.735675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:12.852 [2024-11-25 13:05:52.745643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:12.852 [2024-11-25 13:05:52.745705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:12.852 [2024-11-25 13:05:52.745730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:12.852 [2024-11-25 13:05:52.745739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:12.852 [2024-11-25 13:05:52.745746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:12.852 [2024-11-25 13:05:52.745767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:12.852 qpair failed and we were unable to recover it. 00:31:13.115 [2024-11-25 13:05:52.755671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.115 [2024-11-25 13:05:52.755725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.115 [2024-11-25 13:05:52.755742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.115 [2024-11-25 13:05:52.755749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.115 [2024-11-25 13:05:52.755756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.115 [2024-11-25 13:05:52.755771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.115 qpair failed and we were unable to recover it. 
00:31:13.115 [2024-11-25 13:05:52.765623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.115 [2024-11-25 13:05:52.765670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.115 [2024-11-25 13:05:52.765684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.115 [2024-11-25 13:05:52.765692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.115 [2024-11-25 13:05:52.765703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.115 [2024-11-25 13:05:52.765717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.115 qpair failed and we were unable to recover it. 00:31:13.115 [2024-11-25 13:05:52.775627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.115 [2024-11-25 13:05:52.775676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.115 [2024-11-25 13:05:52.775689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.115 [2024-11-25 13:05:52.775697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.115 [2024-11-25 13:05:52.775703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.115 [2024-11-25 13:05:52.775717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.115 qpair failed and we were unable to recover it. 00:31:13.115 [2024-11-25 13:05:52.785739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.115 [2024-11-25 13:05:52.785797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.115 [2024-11-25 13:05:52.785811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.115 [2024-11-25 13:05:52.785818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.115 [2024-11-25 13:05:52.785825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.115 [2024-11-25 13:05:52.785838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.115 qpair failed and we were unable to recover it. 
00:31:13.115 [2024-11-25 13:05:52.795762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.115 [2024-11-25 13:05:52.795817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.115 [2024-11-25 13:05:52.795831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.115 [2024-11-25 13:05:52.795838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.115 [2024-11-25 13:05:52.795845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.115 [2024-11-25 13:05:52.795858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.115 qpair failed and we were unable to recover it. 00:31:13.115 [2024-11-25 13:05:52.805635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.115 [2024-11-25 13:05:52.805703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.115 [2024-11-25 13:05:52.805717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.115 [2024-11-25 13:05:52.805724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.115 [2024-11-25 13:05:52.805731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.805745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 00:31:13.116 [2024-11-25 13:05:52.815785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.815835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.815850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.815857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.815868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.815882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 
00:31:13.116 [2024-11-25 13:05:52.825851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.825910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.825924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.825931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.825938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.825952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 00:31:13.116 [2024-11-25 13:05:52.835900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.835956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.835970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.835977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.835984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.835998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 00:31:13.116 [2024-11-25 13:05:52.845871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.845919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.845932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.845939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.845946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.845960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 
00:31:13.116 [2024-11-25 13:05:52.855904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.855959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.855975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.855983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.855989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.856003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 00:31:13.116 [2024-11-25 13:05:52.865964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.866019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.866033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.866040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.866047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.866061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 00:31:13.116 [2024-11-25 13:05:52.875969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.876069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.876083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.876090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.876097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.876111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 
00:31:13.116 [2024-11-25 13:05:52.885948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.886003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.886017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.886024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.886031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.886045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 00:31:13.116 [2024-11-25 13:05:52.896011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.896063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.896076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.896084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.896095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.896109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 00:31:13.116 [2024-11-25 13:05:52.905954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.906009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.906024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.116 [2024-11-25 13:05:52.906031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.116 [2024-11-25 13:05:52.906038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.116 [2024-11-25 13:05:52.906051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.116 qpair failed and we were unable to recover it. 
00:31:13.116 [2024-11-25 13:05:52.916115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.116 [2024-11-25 13:05:52.916195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.116 [2024-11-25 13:05:52.916208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.916216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.916223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.916236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 00:31:13.117 [2024-11-25 13:05:52.925965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:52.926017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:52.926030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.926037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.926044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.926058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 00:31:13.117 [2024-11-25 13:05:52.935986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:52.936033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:52.936047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.936054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.936061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.936074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 
00:31:13.117 [2024-11-25 13:05:52.946240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:52.946298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:52.946311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.946319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.946326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.946340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 00:31:13.117 [2024-11-25 13:05:52.956239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:52.956297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:52.956310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.956317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.956324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.956337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 00:31:13.117 [2024-11-25 13:05:52.966190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:52.966240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:52.966253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.966261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.966267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.966281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 
00:31:13.117 [2024-11-25 13:05:52.976202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:52.976250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:52.976263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.976270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.976277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.976290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 00:31:13.117 [2024-11-25 13:05:52.986294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:52.986368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:52.986385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.986392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.986399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.986413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 00:31:13.117 [2024-11-25 13:05:52.996322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:52.996377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:52.996391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:52.996398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:52.996404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:52.996418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 
00:31:13.117 [2024-11-25 13:05:53.006191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:53.006282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:53.006296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:53.006304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:53.006310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.117 [2024-11-25 13:05:53.006324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.117 qpair failed and we were unable to recover it. 00:31:13.117 [2024-11-25 13:05:53.016221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.117 [2024-11-25 13:05:53.016279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.117 [2024-11-25 13:05:53.016293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.117 [2024-11-25 13:05:53.016300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.117 [2024-11-25 13:05:53.016307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.382 [2024-11-25 13:05:53.016320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.382 qpair failed and we were unable to recover it. 00:31:13.382 [2024-11-25 13:05:53.026268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.382 [2024-11-25 13:05:53.026326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.382 [2024-11-25 13:05:53.026340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.382 [2024-11-25 13:05:53.026347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.382 [2024-11-25 13:05:53.026357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.382 [2024-11-25 13:05:53.026371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.382 qpair failed and we were unable to recover it. 
00:31:13.382 [2024-11-25 13:05:53.036432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.382 [2024-11-25 13:05:53.036519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.382 [2024-11-25 13:05:53.036532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.382 [2024-11-25 13:05:53.036540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.382 [2024-11-25 13:05:53.036547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.382 [2024-11-25 13:05:53.036560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.382 qpair failed and we were unable to recover it. 00:31:13.382 [2024-11-25 13:05:53.046388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.382 [2024-11-25 13:05:53.046438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.382 [2024-11-25 13:05:53.046452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.382 [2024-11-25 13:05:53.046459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.382 [2024-11-25 13:05:53.046466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.382 [2024-11-25 13:05:53.046479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.382 qpair failed and we were unable to recover it. 00:31:13.382 [2024-11-25 13:05:53.056472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.382 [2024-11-25 13:05:53.056544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.382 [2024-11-25 13:05:53.056557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.382 [2024-11-25 13:05:53.056564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.382 [2024-11-25 13:05:53.056571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.382 [2024-11-25 13:05:53.056585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.382 qpair failed and we were unable to recover it. 
00:31:13.382 [2024-11-25 13:05:53.066492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.382 [2024-11-25 13:05:53.066553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.382 [2024-11-25 13:05:53.066568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.382 [2024-11-25 13:05:53.066575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.382 [2024-11-25 13:05:53.066582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.382 [2024-11-25 13:05:53.066603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.382 qpair failed and we were unable to recover it. 00:31:13.382 [2024-11-25 13:05:53.076495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.382 [2024-11-25 13:05:53.076555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.382 [2024-11-25 13:05:53.076569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.382 [2024-11-25 13:05:53.076577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.076583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.076597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 00:31:13.383 [2024-11-25 13:05:53.086490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.086537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.086550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.086558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.086565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.086578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 
00:31:13.383 [2024-11-25 13:05:53.096539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.096591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.096607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.096614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.096621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.096637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 00:31:13.383 [2024-11-25 13:05:53.106514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.106573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.106586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.106594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.106601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.106615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 00:31:13.383 [2024-11-25 13:05:53.116647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.116701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.116719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.116726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.116733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.116747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 
00:31:13.383 [2024-11-25 13:05:53.126563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.126636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.126650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.126657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.126664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.126678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 00:31:13.383 [2024-11-25 13:05:53.136653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.136701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.136715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.136722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.136729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.136743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 00:31:13.383 [2024-11-25 13:05:53.146687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.146747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.146760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.146768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.146774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.146788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 
00:31:13.383 [2024-11-25 13:05:53.156723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.156786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.156799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.156807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.156817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.156830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 00:31:13.383 [2024-11-25 13:05:53.166723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.166777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.166791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.166798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.383 [2024-11-25 13:05:53.166804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.383 [2024-11-25 13:05:53.166818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.383 qpair failed and we were unable to recover it. 00:31:13.383 [2024-11-25 13:05:53.176764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.383 [2024-11-25 13:05:53.176814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.383 [2024-11-25 13:05:53.176828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.383 [2024-11-25 13:05:53.176835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.176841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.176855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 
00:31:13.384 [2024-11-25 13:05:53.186834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.186937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.186952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.186960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.186968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.186982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 00:31:13.384 [2024-11-25 13:05:53.196855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.196912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.196926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.196934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.196940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.196954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 00:31:13.384 [2024-11-25 13:05:53.206782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.206832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.206845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.206853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.206859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.206878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 
00:31:13.384 [2024-11-25 13:05:53.216869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.216922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.216936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.216943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.216950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.216964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 00:31:13.384 [2024-11-25 13:05:53.226871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.226957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.226971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.226978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.226985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.226999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 00:31:13.384 [2024-11-25 13:05:53.236983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.237038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.237051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.237059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.237065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.237079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 
00:31:13.384 [2024-11-25 13:05:53.246954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.247004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.247021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.247028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.247035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.247049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 00:31:13.384 [2024-11-25 13:05:53.256966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.257014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.257028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.257035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.257041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.257055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 00:31:13.384 [2024-11-25 13:05:53.266907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.266964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.266977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.266984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.384 [2024-11-25 13:05:53.266991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.384 [2024-11-25 13:05:53.267005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.384 qpair failed and we were unable to recover it. 
00:31:13.384 [2024-11-25 13:05:53.277045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.384 [2024-11-25 13:05:53.277104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.384 [2024-11-25 13:05:53.277117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.384 [2024-11-25 13:05:53.277124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.385 [2024-11-25 13:05:53.277131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.385 [2024-11-25 13:05:53.277144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.385 qpair failed and we were unable to recover it. 00:31:13.648 [2024-11-25 13:05:53.287035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.287085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.287098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.287105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.287115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.287129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 00:31:13.648 [2024-11-25 13:05:53.297096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.297148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.297162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.297169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.297176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.297189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 
00:31:13.648 [2024-11-25 13:05:53.307168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.307228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.307242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.307249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.307256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.307269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 00:31:13.648 [2024-11-25 13:05:53.317178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.317270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.317283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.317291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.317298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.317311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 00:31:13.648 [2024-11-25 13:05:53.327141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.327188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.327202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.327209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.327216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.327230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 
00:31:13.648 [2024-11-25 13:05:53.337052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.337109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.337123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.337130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.337137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.337151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 00:31:13.648 [2024-11-25 13:05:53.347254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.347308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.347321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.347329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.347335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.347349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 00:31:13.648 [2024-11-25 13:05:53.357197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.357297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.357311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.357318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.357325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.357339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 
00:31:13.648 [2024-11-25 13:05:53.367241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.367294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.367308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.367315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.367322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.367335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 00:31:13.648 [2024-11-25 13:05:53.377280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.377326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.377342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.377350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.377356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.377370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 00:31:13.648 [2024-11-25 13:05:53.387231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.387294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.387307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.387314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.648 [2024-11-25 13:05:53.387321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.648 [2024-11-25 13:05:53.387335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.648 qpair failed and we were unable to recover it. 
00:31:13.648 [2024-11-25 13:05:53.397385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.648 [2024-11-25 13:05:53.397468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.648 [2024-11-25 13:05:53.397481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.648 [2024-11-25 13:05:53.397489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.397495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.397509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.407370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.407465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.407479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.407486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.407493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.407507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.417399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.417449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.417462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.417470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.417480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.417493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 
00:31:13.649 [2024-11-25 13:05:53.427462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.427518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.427532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.427539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.427546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.427560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.437510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.437567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.437581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.437588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.437596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.437610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.447517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.447571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.447584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.447592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.447598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.447612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 
00:31:13.649 [2024-11-25 13:05:53.457496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.457552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.457565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.457573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.457579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.457593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.467553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.467615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.467628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.467636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.467643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.467656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.477604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.477690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.477716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.477725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.477732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.477752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 
00:31:13.649 [2024-11-25 13:05:53.487588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.487637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.487652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.487660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.487667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.487682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.497610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.497661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.497675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.497683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.497690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.497705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.507677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.507779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.507798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.507805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.507812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.507826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 
00:31:13.649 [2024-11-25 13:05:53.517705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.517788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.517802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.517810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.517818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.649 [2024-11-25 13:05:53.517832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.649 qpair failed and we were unable to recover it. 00:31:13.649 [2024-11-25 13:05:53.527697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.649 [2024-11-25 13:05:53.527748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.649 [2024-11-25 13:05:53.527762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.649 [2024-11-25 13:05:53.527770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.649 [2024-11-25 13:05:53.527776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.650 [2024-11-25 13:05:53.527790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.650 qpair failed and we were unable to recover it. 00:31:13.650 [2024-11-25 13:05:53.537585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.650 [2024-11-25 13:05:53.537662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.650 [2024-11-25 13:05:53.537677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.650 [2024-11-25 13:05:53.537684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.650 [2024-11-25 13:05:53.537692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.650 [2024-11-25 13:05:53.537706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.650 qpair failed and we were unable to recover it. 
00:31:13.650 [2024-11-25 13:05:53.547783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.650 [2024-11-25 13:05:53.547839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.650 [2024-11-25 13:05:53.547853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.650 [2024-11-25 13:05:53.547860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.650 [2024-11-25 13:05:53.547876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.650 [2024-11-25 13:05:53.547891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.650 qpair failed and we were unable to recover it. 00:31:13.911 [2024-11-25 13:05:53.557822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.911 [2024-11-25 13:05:53.557881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.911 [2024-11-25 13:05:53.557895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.911 [2024-11-25 13:05:53.557902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.911 [2024-11-25 13:05:53.557909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.557923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.567664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.567714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.567727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.567735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.567742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.567756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 
00:31:13.912 [2024-11-25 13:05:53.577825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.577881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.577895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.577902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.577909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.577923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.587904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.587960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.587974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.587981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.587988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.588002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.597916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.597975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.597988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.597995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.598002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.598015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 
00:31:13.912 [2024-11-25 13:05:53.607891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.607944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.607957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.607965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.607971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.607985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.617900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.617948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.617962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.617969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.617976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.617990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.627997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.628053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.628066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.628073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.628080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.628093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 
00:31:13.912 [2024-11-25 13:05:53.638017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.638085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.638102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.638110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.638116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.638130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.648021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.648101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.648115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.648123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.648130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.648144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.658054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.658105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.658119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.658127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.658134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.658147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 
00:31:13.912 [2024-11-25 13:05:53.668104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.668179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.668193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.668200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.668207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.668222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.678148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.912 [2024-11-25 13:05:53.678201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.912 [2024-11-25 13:05:53.678214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.912 [2024-11-25 13:05:53.678222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.912 [2024-11-25 13:05:53.678233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.912 [2024-11-25 13:05:53.678247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.912 qpair failed and we were unable to recover it. 00:31:13.912 [2024-11-25 13:05:53.688169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:13.913 [2024-11-25 13:05:53.688249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:13.913 [2024-11-25 13:05:53.688263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:13.913 [2024-11-25 13:05:53.688272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:13.913 [2024-11-25 13:05:53.688278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:13.913 [2024-11-25 13:05:53.688293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:13.913 qpair failed and we were unable to recover it. 
00:31:13.913 [2024-11-25 13:05:53.698157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.698209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.698224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.698232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.698239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.698253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.708221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.708279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.708292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.708299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.708306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.708320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.718240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.718296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.718309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.718317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.718324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.718337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.728110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.728160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.728174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.728181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.728188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.728201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.738259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.738316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.738330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.738338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.738344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.738358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.748342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.748402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.748415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.748423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.748429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.748444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.758349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.758404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.758418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.758425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.758432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.758446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.768329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.768397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.768415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.768423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.768429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.768444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.778369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.778418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.778432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.778439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.778446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.778460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.788419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.788476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.788489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.788496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.788503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.788516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.798489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.798543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.798556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.913 [2024-11-25 13:05:53.798563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.913 [2024-11-25 13:05:53.798570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.913 [2024-11-25 13:05:53.798583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.913 qpair failed and we were unable to recover it.
00:31:13.913 [2024-11-25 13:05:53.808471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:13.913 [2024-11-25 13:05:53.808527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:13.913 [2024-11-25 13:05:53.808541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:13.914 [2024-11-25 13:05:53.808549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:13.914 [2024-11-25 13:05:53.808559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:13.914 [2024-11-25 13:05:53.808573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:13.914 qpair failed and we were unable to recover it.
00:31:14.176 [2024-11-25 13:05:53.818481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.176 [2024-11-25 13:05:53.818536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.176 [2024-11-25 13:05:53.818550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.176 [2024-11-25 13:05:53.818557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.176 [2024-11-25 13:05:53.818564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.176 [2024-11-25 13:05:53.818578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.176 qpair failed and we were unable to recover it.
00:31:14.176 [2024-11-25 13:05:53.828550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.176 [2024-11-25 13:05:53.828655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.176 [2024-11-25 13:05:53.828681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.176 [2024-11-25 13:05:53.828690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.176 [2024-11-25 13:05:53.828697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.176 [2024-11-25 13:05:53.828717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.176 qpair failed and we were unable to recover it.
00:31:14.176 [2024-11-25 13:05:53.838591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.176 [2024-11-25 13:05:53.838657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.176 [2024-11-25 13:05:53.838683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.176 [2024-11-25 13:05:53.838692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.176 [2024-11-25 13:05:53.838700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.176 [2024-11-25 13:05:53.838719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.176 qpair failed and we were unable to recover it.
00:31:14.176 [2024-11-25 13:05:53.848554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.176 [2024-11-25 13:05:53.848649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.176 [2024-11-25 13:05:53.848664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.176 [2024-11-25 13:05:53.848673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.176 [2024-11-25 13:05:53.848680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.176 [2024-11-25 13:05:53.848695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.176 qpair failed and we were unable to recover it.
00:31:14.176 [2024-11-25 13:05:53.858562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.176 [2024-11-25 13:05:53.858615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.176 [2024-11-25 13:05:53.858630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.176 [2024-11-25 13:05:53.858637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.176 [2024-11-25 13:05:53.858644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.176 [2024-11-25 13:05:53.858658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.176 qpair failed and we were unable to recover it.
00:31:14.176 [2024-11-25 13:05:53.868663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.176 [2024-11-25 13:05:53.868719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.176 [2024-11-25 13:05:53.868733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.176 [2024-11-25 13:05:53.868740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.176 [2024-11-25 13:05:53.868747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.176 [2024-11-25 13:05:53.868761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.176 qpair failed and we were unable to recover it.
00:31:14.176 [2024-11-25 13:05:53.878719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.176 [2024-11-25 13:05:53.878774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.176 [2024-11-25 13:05:53.878787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.176 [2024-11-25 13:05:53.878795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.176 [2024-11-25 13:05:53.878801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.176 [2024-11-25 13:05:53.878815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.176 qpair failed and we were unable to recover it.
00:31:14.176 [2024-11-25 13:05:53.888676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.176 [2024-11-25 13:05:53.888725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.176 [2024-11-25 13:05:53.888738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.176 [2024-11-25 13:05:53.888745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.176 [2024-11-25 13:05:53.888752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.176 [2024-11-25 13:05:53.888766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.898672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.898722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.898740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.898748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.898755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.898769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.908651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.908748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.908762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.908770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.908776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.908790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.918782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.918874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.918888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.918895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.918903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.918917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.928776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.928830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.928845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.928853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.928860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.928882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.938808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.938854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.938872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.938880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.938891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.938906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.948851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.948918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.948932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.948939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.948946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.948960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.958898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.958952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.958965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.958973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.958979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.958993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.968889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.968942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.968955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.968962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.968969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.968983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.978903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.978948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.978961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.978969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.978975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.978989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.988974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.989029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.989042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.989050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.989056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.989070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.177 qpair failed and we were unable to recover it.
00:31:14.177 [2024-11-25 13:05:53.998890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.177 [2024-11-25 13:05:53.998947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.177 [2024-11-25 13:05:53.998960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.177 [2024-11-25 13:05:53.998967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.177 [2024-11-25 13:05:53.998974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.177 [2024-11-25 13:05:53.998987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.178 qpair failed and we were unable to recover it.
00:31:14.178 [2024-11-25 13:05:54.008994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.178 [2024-11-25 13:05:54.009045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.178 [2024-11-25 13:05:54.009058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.178 [2024-11-25 13:05:54.009066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.178 [2024-11-25 13:05:54.009072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.178 [2024-11-25 13:05:54.009086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.178 qpair failed and we were unable to recover it.
00:31:14.178 [2024-11-25 13:05:54.019007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.178 [2024-11-25 13:05:54.019060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.178 [2024-11-25 13:05:54.019073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.178 [2024-11-25 13:05:54.019080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.178 [2024-11-25 13:05:54.019087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.178 [2024-11-25 13:05:54.019101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.178 qpair failed and we were unable to recover it.
00:31:14.178 [2024-11-25 13:05:54.029111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.178 [2024-11-25 13:05:54.029165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.178 [2024-11-25 13:05:54.029181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.178 [2024-11-25 13:05:54.029188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.178 [2024-11-25 13:05:54.029195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.178 [2024-11-25 13:05:54.029208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.178 qpair failed and we were unable to recover it.
00:31:14.178 [2024-11-25 13:05:54.039126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.178 [2024-11-25 13:05:54.039185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.178 [2024-11-25 13:05:54.039198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.178 [2024-11-25 13:05:54.039206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.178 [2024-11-25 13:05:54.039212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.178 [2024-11-25 13:05:54.039225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.178 qpair failed and we were unable to recover it.
00:31:14.178 [2024-11-25 13:05:54.049122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.178 [2024-11-25 13:05:54.049220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.178 [2024-11-25 13:05:54.049233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.178 [2024-11-25 13:05:54.049240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.178 [2024-11-25 13:05:54.049247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.178 [2024-11-25 13:05:54.049260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.178 qpair failed and we were unable to recover it.
00:31:14.178 [2024-11-25 13:05:54.059122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.178 [2024-11-25 13:05:54.059180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.178 [2024-11-25 13:05:54.059193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.178 [2024-11-25 13:05:54.059201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.178 [2024-11-25 13:05:54.059207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.178 [2024-11-25 13:05:54.059221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.178 qpair failed and we were unable to recover it.
00:31:14.178 [2024-11-25 13:05:54.069242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.178 [2024-11-25 13:05:54.069343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.178 [2024-11-25 13:05:54.069359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.178 [2024-11-25 13:05:54.069367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.178 [2024-11-25 13:05:54.069381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.178 [2024-11-25 13:05:54.069396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.178 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.079128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.079191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.442 [2024-11-25 13:05:54.079206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.442 [2024-11-25 13:05:54.079213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.442 [2024-11-25 13:05:54.079220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.442 [2024-11-25 13:05:54.079234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.442 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.089205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.089260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.442 [2024-11-25 13:05:54.089274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.442 [2024-11-25 13:05:54.089281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.442 [2024-11-25 13:05:54.089287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.442 [2024-11-25 13:05:54.089301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.442 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.099257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.099313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.442 [2024-11-25 13:05:54.099328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.442 [2024-11-25 13:05:54.099335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.442 [2024-11-25 13:05:54.099342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.442 [2024-11-25 13:05:54.099356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.442 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.109326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.109383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.442 [2024-11-25 13:05:54.109397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.442 [2024-11-25 13:05:54.109405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.442 [2024-11-25 13:05:54.109411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.442 [2024-11-25 13:05:54.109425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.442 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.119375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.119431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.442 [2024-11-25 13:05:54.119444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.442 [2024-11-25 13:05:54.119451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.442 [2024-11-25 13:05:54.119458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.442 [2024-11-25 13:05:54.119471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.442 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.129341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.129387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.442 [2024-11-25 13:05:54.129401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.442 [2024-11-25 13:05:54.129409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.442 [2024-11-25 13:05:54.129415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.442 [2024-11-25 13:05:54.129429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.442 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.139352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.139402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.442 [2024-11-25 13:05:54.139415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.442 [2024-11-25 13:05:54.139423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.442 [2024-11-25 13:05:54.139429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.442 [2024-11-25 13:05:54.139443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.442 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.149447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.149532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.442 [2024-11-25 13:05:54.149545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.442 [2024-11-25 13:05:54.149552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.442 [2024-11-25 13:05:54.149559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.442 [2024-11-25 13:05:54.149572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.442 qpair failed and we were unable to recover it.
00:31:14.442 [2024-11-25 13:05:54.159479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.442 [2024-11-25 13:05:54.159536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.159554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.159561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.159567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.159581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.169441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.169487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.169501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.169508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.169515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.169528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.179463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.179547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.179560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.179567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.179574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.179587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.189415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.189471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.189485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.189493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.189500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.189514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.199579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.199634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.199648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.199656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.199666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.199680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.209550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.209645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.209671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.209680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.209687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.209707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.219544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.219599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.219624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.219633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.219640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.219660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.229648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.229708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.229733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.229742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.229749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.229768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.239665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.239721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.443 [2024-11-25 13:05:54.239736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.443 [2024-11-25 13:05:54.239744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.443 [2024-11-25 13:05:54.239751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.443 [2024-11-25 13:05:54.239766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.443 qpair failed and we were unable to recover it.
00:31:14.443 [2024-11-25 13:05:54.249672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.443 [2024-11-25 13:05:54.249725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.444 [2024-11-25 13:05:54.249740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.444 [2024-11-25 13:05:54.249747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.444 [2024-11-25 13:05:54.249754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.444 [2024-11-25 13:05:54.249768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.444 qpair failed and we were unable to recover it.
00:31:14.444 [2024-11-25 13:05:54.259699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.444 [2024-11-25 13:05:54.259755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.444 [2024-11-25 13:05:54.259770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.444 [2024-11-25 13:05:54.259777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.444 [2024-11-25 13:05:54.259784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.444 [2024-11-25 13:05:54.259798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.444 qpair failed and we were unable to recover it.
00:31:14.444 [2024-11-25 13:05:54.269771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.444 [2024-11-25 13:05:54.269826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.444 [2024-11-25 13:05:54.269839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.444 [2024-11-25 13:05:54.269846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.444 [2024-11-25 13:05:54.269853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.444 [2024-11-25 13:05:54.269870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.444 qpair failed and we were unable to recover it. 00:31:14.444 [2024-11-25 13:05:54.279676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.444 [2024-11-25 13:05:54.279729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.444 [2024-11-25 13:05:54.279744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.444 [2024-11-25 13:05:54.279752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.444 [2024-11-25 13:05:54.279758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.444 [2024-11-25 13:05:54.279773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.444 qpair failed and we were unable to recover it. 00:31:14.444 [2024-11-25 13:05:54.289810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.444 [2024-11-25 13:05:54.289858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.444 [2024-11-25 13:05:54.289883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.444 [2024-11-25 13:05:54.289891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.444 [2024-11-25 13:05:54.289897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.444 [2024-11-25 13:05:54.289912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.444 qpair failed and we were unable to recover it. 
00:31:14.444 [2024-11-25 13:05:54.299806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.444 [2024-11-25 13:05:54.299860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.444 [2024-11-25 13:05:54.299878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.444 [2024-11-25 13:05:54.299885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.444 [2024-11-25 13:05:54.299891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.444 [2024-11-25 13:05:54.299906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.444 qpair failed and we were unable to recover it. 00:31:14.444 [2024-11-25 13:05:54.309888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.444 [2024-11-25 13:05:54.309943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.444 [2024-11-25 13:05:54.309956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.444 [2024-11-25 13:05:54.309964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.444 [2024-11-25 13:05:54.309970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.444 [2024-11-25 13:05:54.309984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.444 qpair failed and we were unable to recover it. 00:31:14.444 [2024-11-25 13:05:54.319896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.444 [2024-11-25 13:05:54.319950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.444 [2024-11-25 13:05:54.319964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.444 [2024-11-25 13:05:54.319971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.444 [2024-11-25 13:05:54.319978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.444 [2024-11-25 13:05:54.319991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.444 qpair failed and we were unable to recover it. 
00:31:14.444 [2024-11-25 13:05:54.329881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.444 [2024-11-25 13:05:54.329929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.444 [2024-11-25 13:05:54.329943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.444 [2024-11-25 13:05:54.329950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.444 [2024-11-25 13:05:54.329960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.444 [2024-11-25 13:05:54.329974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.444 qpair failed and we were unable to recover it. 00:31:14.444 [2024-11-25 13:05:54.339781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.444 [2024-11-25 13:05:54.339834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.444 [2024-11-25 13:05:54.339847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.444 [2024-11-25 13:05:54.339855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.444 [2024-11-25 13:05:54.339865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.444 [2024-11-25 13:05:54.339879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.444 qpair failed and we were unable to recover it. 00:31:14.706 [2024-11-25 13:05:54.349966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.706 [2024-11-25 13:05:54.350055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.706 [2024-11-25 13:05:54.350069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.706 [2024-11-25 13:05:54.350077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.706 [2024-11-25 13:05:54.350084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.706 [2024-11-25 13:05:54.350098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.706 qpair failed and we were unable to recover it. 
00:31:14.706 [2024-11-25 13:05:54.359980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.706 [2024-11-25 13:05:54.360034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.706 [2024-11-25 13:05:54.360047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.706 [2024-11-25 13:05:54.360055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.706 [2024-11-25 13:05:54.360061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.706 [2024-11-25 13:05:54.360075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.706 qpair failed and we were unable to recover it. 00:31:14.706 [2024-11-25 13:05:54.370004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.706 [2024-11-25 13:05:54.370052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.706 [2024-11-25 13:05:54.370065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.706 [2024-11-25 13:05:54.370073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.706 [2024-11-25 13:05:54.370079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.706 [2024-11-25 13:05:54.370094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.706 qpair failed and we were unable to recover it. 00:31:14.706 [2024-11-25 13:05:54.380004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.706 [2024-11-25 13:05:54.380057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.706 [2024-11-25 13:05:54.380070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.706 [2024-11-25 13:05:54.380078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.706 [2024-11-25 13:05:54.380084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.706 [2024-11-25 13:05:54.380098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.706 qpair failed and we were unable to recover it. 
00:31:14.706 [2024-11-25 13:05:54.390084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.706 [2024-11-25 13:05:54.390137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.706 [2024-11-25 13:05:54.390150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.706 [2024-11-25 13:05:54.390158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.706 [2024-11-25 13:05:54.390164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.706 [2024-11-25 13:05:54.390178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.706 qpair failed and we were unable to recover it. 00:31:14.706 [2024-11-25 13:05:54.400053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.706 [2024-11-25 13:05:54.400106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.706 [2024-11-25 13:05:54.400119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.706 [2024-11-25 13:05:54.400127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.706 [2024-11-25 13:05:54.400133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.706 [2024-11-25 13:05:54.400147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.706 qpair failed and we were unable to recover it. 00:31:14.706 [2024-11-25 13:05:54.409975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.706 [2024-11-25 13:05:54.410023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.706 [2024-11-25 13:05:54.410036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.706 [2024-11-25 13:05:54.410044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.410050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.410064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 
00:31:14.707 [2024-11-25 13:05:54.420113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.420161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.420178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.420185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.420192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.420205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 00:31:14.707 [2024-11-25 13:05:54.430201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.430254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.430267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.430274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.430281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.430294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 00:31:14.707 [2024-11-25 13:05:54.440252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.440306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.440319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.440328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.440335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.440349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 
00:31:14.707 [2024-11-25 13:05:54.450223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.450275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.450288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.450296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.450302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.450316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 00:31:14.707 [2024-11-25 13:05:54.460256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.460301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.460314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.460322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.460332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.460346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 00:31:14.707 [2024-11-25 13:05:54.470314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.470376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.470390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.470398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.470404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.470418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 
00:31:14.707 [2024-11-25 13:05:54.480338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.480403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.480416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.480423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.480430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.480443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 00:31:14.707 [2024-11-25 13:05:54.490317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.490365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.490379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.490387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.490395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.490408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 00:31:14.707 [2024-11-25 13:05:54.500359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.500409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.500422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.500430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.500436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.500449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 
00:31:14.707 [2024-11-25 13:05:54.510418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.510472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.510485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.510493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.510499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.510513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 00:31:14.707 [2024-11-25 13:05:54.520402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.520455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.520468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.707 [2024-11-25 13:05:54.520476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.707 [2024-11-25 13:05:54.520482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.707 [2024-11-25 13:05:54.520495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.707 qpair failed and we were unable to recover it. 00:31:14.707 [2024-11-25 13:05:54.530440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.707 [2024-11-25 13:05:54.530490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.707 [2024-11-25 13:05:54.530503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.708 [2024-11-25 13:05:54.530510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.708 [2024-11-25 13:05:54.530517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.708 [2024-11-25 13:05:54.530530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.708 qpair failed and we were unable to recover it. 
00:31:14.708 [2024-11-25 13:05:54.540466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.708 [2024-11-25 13:05:54.540520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.708 [2024-11-25 13:05:54.540533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.708 [2024-11-25 13:05:54.540541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.708 [2024-11-25 13:05:54.540547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.708 [2024-11-25 13:05:54.540561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.708 qpair failed and we were unable to recover it. 00:31:14.708 [2024-11-25 13:05:54.550561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.708 [2024-11-25 13:05:54.550629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.708 [2024-11-25 13:05:54.550645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.708 [2024-11-25 13:05:54.550652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.708 [2024-11-25 13:05:54.550658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.708 [2024-11-25 13:05:54.550672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.708 qpair failed and we were unable to recover it. 00:31:14.708 [2024-11-25 13:05:54.560559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.708 [2024-11-25 13:05:54.560622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.708 [2024-11-25 13:05:54.560648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.708 [2024-11-25 13:05:54.560657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.708 [2024-11-25 13:05:54.560664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.708 [2024-11-25 13:05:54.560683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.708 qpair failed and we were unable to recover it. 
00:31:14.708 [2024-11-25 13:05:54.570432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.708 [2024-11-25 13:05:54.570487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.708 [2024-11-25 13:05:54.570506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.708 [2024-11-25 13:05:54.570514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.708 [2024-11-25 13:05:54.570521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.708 [2024-11-25 13:05:54.570537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.708 qpair failed and we were unable to recover it. 00:31:14.708 [2024-11-25 13:05:54.580566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.708 [2024-11-25 13:05:54.580638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.708 [2024-11-25 13:05:54.580654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.708 [2024-11-25 13:05:54.580661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.708 [2024-11-25 13:05:54.580670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.708 [2024-11-25 13:05:54.580685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.708 qpair failed and we were unable to recover it. 00:31:14.708 [2024-11-25 13:05:54.590630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.708 [2024-11-25 13:05:54.590688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.708 [2024-11-25 13:05:54.590704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.708 [2024-11-25 13:05:54.590712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.708 [2024-11-25 13:05:54.590722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.708 [2024-11-25 13:05:54.590737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.708 qpair failed and we were unable to recover it. 
00:31:14.708 [2024-11-25 13:05:54.600676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.708 [2024-11-25 13:05:54.600735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.708 [2024-11-25 13:05:54.600749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.708 [2024-11-25 13:05:54.600756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.708 [2024-11-25 13:05:54.600762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.708 [2024-11-25 13:05:54.600776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.708 qpair failed and we were unable to recover it. 00:31:14.970 [2024-11-25 13:05:54.610647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.970 [2024-11-25 13:05:54.610752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.970 [2024-11-25 13:05:54.610765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.970 [2024-11-25 13:05:54.610773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.970 [2024-11-25 13:05:54.610779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.970 [2024-11-25 13:05:54.610794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.970 qpair failed and we were unable to recover it. 00:31:14.970 [2024-11-25 13:05:54.620670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.970 [2024-11-25 13:05:54.620716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.970 [2024-11-25 13:05:54.620730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.970 [2024-11-25 13:05:54.620738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.970 [2024-11-25 13:05:54.620744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.970 [2024-11-25 13:05:54.620758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.970 qpair failed and we were unable to recover it. 
00:31:14.970 [2024-11-25 13:05:54.630733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.970 [2024-11-25 13:05:54.630801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.970 [2024-11-25 13:05:54.630815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.970 [2024-11-25 13:05:54.630822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.970 [2024-11-25 13:05:54.630829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.970 [2024-11-25 13:05:54.630842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.970 qpair failed and we were unable to recover it. 00:31:14.970 [2024-11-25 13:05:54.640773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.970 [2024-11-25 13:05:54.640829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.970 [2024-11-25 13:05:54.640842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.970 [2024-11-25 13:05:54.640849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.970 [2024-11-25 13:05:54.640856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.970 [2024-11-25 13:05:54.640875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.970 qpair failed and we were unable to recover it. 00:31:14.970 [2024-11-25 13:05:54.650759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.970 [2024-11-25 13:05:54.650812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.970 [2024-11-25 13:05:54.650825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.970 [2024-11-25 13:05:54.650833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.970 [2024-11-25 13:05:54.650840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.970 [2024-11-25 13:05:54.650853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.970 qpair failed and we were unable to recover it. 
00:31:14.970 [2024-11-25 13:05:54.660756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.970 [2024-11-25 13:05:54.660800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.660813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.660820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.660827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.660840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 00:31:14.971 [2024-11-25 13:05:54.670826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.670883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.670897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.670904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.670911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.670924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 00:31:14.971 [2024-11-25 13:05:54.680875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.680927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.680944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.680952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.680959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.680973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 
00:31:14.971 [2024-11-25 13:05:54.690809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.690866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.690880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.690887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.690893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.690907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 00:31:14.971 [2024-11-25 13:05:54.700879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.700981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.700995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.701002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.701010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.701024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 00:31:14.971 [2024-11-25 13:05:54.710994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.711057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.711070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.711077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.711084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.711097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 
00:31:14.971 [2024-11-25 13:05:54.720993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.721049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.721063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.721071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.721081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.721095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 00:31:14.971 [2024-11-25 13:05:54.730956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.731009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.731023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.731030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.731036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.731050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 00:31:14.971 [2024-11-25 13:05:54.740969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.741031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.741045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.741052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.741059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.741072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 
00:31:14.971 [2024-11-25 13:05:54.751041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.751099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.751113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.751120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.971 [2024-11-25 13:05:54.751127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.971 [2024-11-25 13:05:54.751141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.971 qpair failed and we were unable to recover it. 00:31:14.971 [2024-11-25 13:05:54.761099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.971 [2024-11-25 13:05:54.761153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.971 [2024-11-25 13:05:54.761166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.971 [2024-11-25 13:05:54.761173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.972 [2024-11-25 13:05:54.761180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.972 [2024-11-25 13:05:54.761194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.972 qpair failed and we were unable to recover it. 00:31:14.972 [2024-11-25 13:05:54.771074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.972 [2024-11-25 13:05:54.771124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.972 [2024-11-25 13:05:54.771137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.972 [2024-11-25 13:05:54.771144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.972 [2024-11-25 13:05:54.771151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490 00:31:14.972 [2024-11-25 13:05:54.771165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:14.972 qpair failed and we were unable to recover it. 
00:31:14.972 [2024-11-25 13:05:54.781059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.781112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.972 [2024-11-25 13:05:54.781126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.972 [2024-11-25 13:05:54.781134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.972 [2024-11-25 13:05:54.781141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.972 [2024-11-25 13:05:54.781155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.972 qpair failed and we were unable to recover it.
00:31:14.972 [2024-11-25 13:05:54.791101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.791150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.972 [2024-11-25 13:05:54.791163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.972 [2024-11-25 13:05:54.791171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.972 [2024-11-25 13:05:54.791177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.972 [2024-11-25 13:05:54.791191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.972 qpair failed and we were unable to recover it.
00:31:14.972 [2024-11-25 13:05:54.801048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.801098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.972 [2024-11-25 13:05:54.801112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.972 [2024-11-25 13:05:54.801119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.972 [2024-11-25 13:05:54.801125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.972 [2024-11-25 13:05:54.801140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.972 qpair failed and we were unable to recover it.
00:31:14.972 [2024-11-25 13:05:54.811029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.811074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.972 [2024-11-25 13:05:54.811090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.972 [2024-11-25 13:05:54.811097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.972 [2024-11-25 13:05:54.811104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.972 [2024-11-25 13:05:54.811118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.972 qpair failed and we were unable to recover it.
00:31:14.972 [2024-11-25 13:05:54.821196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.821243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.972 [2024-11-25 13:05:54.821257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.972 [2024-11-25 13:05:54.821265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.972 [2024-11-25 13:05:54.821271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.972 [2024-11-25 13:05:54.821285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.972 qpair failed and we were unable to recover it.
00:31:14.972 [2024-11-25 13:05:54.831212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.831291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.972 [2024-11-25 13:05:54.831305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.972 [2024-11-25 13:05:54.831312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.972 [2024-11-25 13:05:54.831319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.972 [2024-11-25 13:05:54.831333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.972 qpair failed and we were unable to recover it.
00:31:14.972 [2024-11-25 13:05:54.841285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.841334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.972 [2024-11-25 13:05:54.841347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.972 [2024-11-25 13:05:54.841354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.972 [2024-11-25 13:05:54.841360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.972 [2024-11-25 13:05:54.841374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.972 qpair failed and we were unable to recover it.
00:31:14.972 [2024-11-25 13:05:54.851186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.851231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.972 [2024-11-25 13:05:54.851246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.972 [2024-11-25 13:05:54.851253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.972 [2024-11-25 13:05:54.851263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.972 [2024-11-25 13:05:54.851278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.972 qpair failed and we were unable to recover it.
00:31:14.972 [2024-11-25 13:05:54.861295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.972 [2024-11-25 13:05:54.861338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.973 [2024-11-25 13:05:54.861353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.973 [2024-11-25 13:05:54.861360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.973 [2024-11-25 13:05:54.861367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:14.973 [2024-11-25 13:05:54.861381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:14.973 qpair failed and we were unable to recover it.
00:31:15.234 [2024-11-25 13:05:54.871307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.234 [2024-11-25 13:05:54.871355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.234 [2024-11-25 13:05:54.871369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.234 [2024-11-25 13:05:54.871377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.234 [2024-11-25 13:05:54.871383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.234 [2024-11-25 13:05:54.871397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.234 qpair failed and we were unable to recover it.
00:31:15.234 [2024-11-25 13:05:54.881406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.234 [2024-11-25 13:05:54.881461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.234 [2024-11-25 13:05:54.881475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.234 [2024-11-25 13:05:54.881482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.234 [2024-11-25 13:05:54.881489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.234 [2024-11-25 13:05:54.881502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.234 qpair failed and we were unable to recover it.
00:31:15.234 [2024-11-25 13:05:54.891410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.234 [2024-11-25 13:05:54.891453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.234 [2024-11-25 13:05:54.891467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.234 [2024-11-25 13:05:54.891475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.234 [2024-11-25 13:05:54.891481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.234 [2024-11-25 13:05:54.891495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.234 qpair failed and we were unable to recover it.
00:31:15.234 [2024-11-25 13:05:54.901415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.234 [2024-11-25 13:05:54.901462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.234 [2024-11-25 13:05:54.901476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.234 [2024-11-25 13:05:54.901484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.234 [2024-11-25 13:05:54.901490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.234 [2024-11-25 13:05:54.901504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.234 qpair failed and we were unable to recover it.
00:31:15.234 [2024-11-25 13:05:54.911443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.911491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.911504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.911511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.911518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.911531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:54.921477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.921529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.921543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.921550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.921557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.921571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:54.931523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.931570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.931584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.931593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.931601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.931617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:54.941525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.941570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.941588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.941596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.941603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.941618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:54.951541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.951594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.951619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.951628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.951635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.951655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:54.961641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.961700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.961726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.961735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.961742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.961763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:54.971606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.971698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.971715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.971723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.971730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.971745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:54.981636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.981685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.981699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.981707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.981718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.981733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:54.991569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:54.991614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:54.991629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:54.991636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:54.991643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:54.991657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:55.001712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:55.001813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:55.001828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.235 [2024-11-25 13:05:55.001836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.235 [2024-11-25 13:05:55.001842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.235 [2024-11-25 13:05:55.001857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.235 qpair failed and we were unable to recover it.
00:31:15.235 [2024-11-25 13:05:55.011746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.235 [2024-11-25 13:05:55.011816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.235 [2024-11-25 13:05:55.011831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.236 [2024-11-25 13:05:55.011838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.236 [2024-11-25 13:05:55.011845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.236 [2024-11-25 13:05:55.011869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.236 qpair failed and we were unable to recover it.
00:31:15.236 [2024-11-25 13:05:55.021734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.236 [2024-11-25 13:05:55.021782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.236 [2024-11-25 13:05:55.021796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.236 [2024-11-25 13:05:55.021804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.236 [2024-11-25 13:05:55.021810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.236 [2024-11-25 13:05:55.021825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.236 qpair failed and we were unable to recover it.
00:31:15.236 [2024-11-25 13:05:55.031758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.236 [2024-11-25 13:05:55.031806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.236 [2024-11-25 13:05:55.031820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.236 [2024-11-25 13:05:55.031827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.236 [2024-11-25 13:05:55.031834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.236 [2024-11-25 13:05:55.031848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.236 qpair failed and we were unable to recover it.
00:31:15.236 [2024-11-25 13:05:55.041806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.236 [2024-11-25 13:05:55.041868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.236 [2024-11-25 13:05:55.041882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.236 [2024-11-25 13:05:55.041890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.236 [2024-11-25 13:05:55.041896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.236 [2024-11-25 13:05:55.041910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.236 qpair failed and we were unable to recover it.
00:31:15.236 [2024-11-25 13:05:55.051818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:15.236 [2024-11-25 13:05:55.051870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:15.236 [2024-11-25 13:05:55.051883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:15.236 [2024-11-25 13:05:55.051891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:15.236 [2024-11-25 13:05:55.051898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xda8490
00:31:15.236 [2024-11-25 13:05:55.051912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:15.236 qpair failed and we were unable to recover it.
00:31:15.236 [2024-11-25 13:05:55.052055] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:31:15.236 A controller has encountered a failure and is being reset.
00:31:15.236 [2024-11-25 13:05:55.052097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda5020 (9): Bad file descriptor
00:31:15.236 Controller properly reset.
00:31:15.498 Initializing NVMe Controllers
00:31:15.498 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:15.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:15.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:31:15.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:31:15.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:31:15.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:31:15.498 Initialization complete. Launching workers.
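The entries above show the cycle this test exercises: each CONNECT on the new I/O qpair is rejected by the target (Unknown controller ID 0x1, surfaced to the host as sct 1, sc 130), the host completion path reports -6 (ENXIO, "No such device or address"), and once the Keep Alive submission fails the controller is torn down and reset before I/O resumes. A minimal host-side sketch of that detect-and-reset cycle, using public SPDK NVMe driver calls; the helper name and recovery policy are illustrative assumptions, not the test's actual code:

#include <stddef.h>
#include "spdk/nvme.h"

/* Hypothetical helper (not from the SPDK test suite): poll one I/O qpair
 * and, on a fatal transport error such as -ENXIO (the "CQ transport error
 * -6" above), recover by resetting the controller and re-creating the
 * qpair. */
static int poll_or_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0); /* 0 = process all */
	if (rc >= 0) {
		return 0; /* rc completions were reaped; the qpair is healthy */
	}

	/* The qpair is unusable: release it before resetting the controller. */
	spdk_nvme_ctrlr_free_io_qpair(*qpair);
	*qpair = NULL;

	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return -1; /* reset failed; caller must detach the controller */
	}

	/* A successful reset ("Controller properly reset." above) invalidates
	 * all I/O qpairs, so allocate a fresh one with default options. */
	*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	return *qpair != NULL ? 0 : -1;
}
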
00:31:15.498 Starting thread on core 1
00:31:15.498 Starting thread on core 2
00:31:15.498 Starting thread on core 3
00:31:15.498 Starting thread on core 0
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:31:15.498
00:31:15.498 real 0m11.507s
00:31:15.498 user 0m21.925s
00:31:15.498 sys 0m3.720s
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:15.498 ************************************
00:31:15.498 END TEST nvmf_target_disconnect_tc2
00:31:15.498 ************************************
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:15.498 rmmod nvme_tcp
00:31:15.498 rmmod nvme_fabrics
00:31:15.498 rmmod nvme_keyring
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 837486 ']'
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 837486
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 837486 ']'
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 837486
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 837486
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 837486'
00:31:15.498 killing process with pid 837486
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 837486
00:31:15.498 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 837486
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:15.759 13:05:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:17.672 13:05:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:17.672
00:31:17.672 real 0m22.961s
00:31:17.672 user 0m50.216s
00:31:17.672 sys 0m10.793s
00:31:17.672 13:05:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:17.672 13:05:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:31:17.672 ************************************
00:31:17.672 END TEST nvmf_target_disconnect
00:31:17.672 ************************************
00:31:17.932 13:05:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:31:17.932
00:31:17.932 real 6m48.209s
00:31:17.932 user 11m29.714s
00:31:17.932 sys 2m24.650s
00:31:17.932 13:05:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:17.932 13:05:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:17.933 ************************************
00:31:17.933 END TEST nvmf_host
00:31:17.933 ************************************
00:31:17.933 13:05:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:31:17.933 13:05:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:31:17.933 13:05:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:31:17.933 13:05:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:17.933 13:05:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:17.933 13:05:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:31:17.933 ************************************
00:31:17.933 START TEST nvmf_target_core_interrupt_mode
00:31:17.933 ************************************
00:31:17.933 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:17.933 * Looking for test storage... 00:31:17.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:17.933 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:17.933 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:31:17.933 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:18.194 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:18.194 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.194 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.194 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:18.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.195 --rc genhtml_branch_coverage=1 00:31:18.195 --rc genhtml_function_coverage=1 00:31:18.195 --rc genhtml_legend=1 00:31:18.195 --rc geninfo_all_blocks=1 00:31:18.195 --rc geninfo_unexecuted_blocks=1 00:31:18.195 00:31:18.195 ' 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:18.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.195 --rc genhtml_branch_coverage=1 00:31:18.195 --rc genhtml_function_coverage=1 00:31:18.195 --rc genhtml_legend=1 00:31:18.195 --rc geninfo_all_blocks=1 00:31:18.195 --rc geninfo_unexecuted_blocks=1 00:31:18.195 00:31:18.195 ' 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:18.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.195 --rc genhtml_branch_coverage=1 00:31:18.195 --rc genhtml_function_coverage=1 00:31:18.195 --rc genhtml_legend=1 00:31:18.195 --rc geninfo_all_blocks=1 00:31:18.195 --rc geninfo_unexecuted_blocks=1 00:31:18.195 00:31:18.195 ' 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:18.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.195 --rc genhtml_branch_coverage=1 00:31:18.195 --rc genhtml_function_coverage=1 00:31:18.195 --rc genhtml_legend=1 00:31:18.195 --rc geninfo_all_blocks=1 00:31:18.195 --rc geninfo_unexecuted_blocks=1 00:31:18.195 00:31:18.195 ' 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.195 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:18.196 ************************************ 00:31:18.196 START TEST nvmf_abort 00:31:18.196 ************************************ 00:31:18.196 13:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:18.196 * Looking for test storage... 00:31:18.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:18.196 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:18.196 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:31:18.196 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:18.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.459 --rc genhtml_branch_coverage=1 00:31:18.459 --rc genhtml_function_coverage=1 00:31:18.459 --rc genhtml_legend=1 00:31:18.459 --rc geninfo_all_blocks=1 00:31:18.459 --rc geninfo_unexecuted_blocks=1 00:31:18.459 00:31:18.459 ' 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:18.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.459 --rc genhtml_branch_coverage=1 00:31:18.459 --rc genhtml_function_coverage=1 00:31:18.459 --rc genhtml_legend=1 00:31:18.459 --rc geninfo_all_blocks=1 00:31:18.459 --rc geninfo_unexecuted_blocks=1 00:31:18.459 00:31:18.459 ' 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:18.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.459 --rc genhtml_branch_coverage=1 00:31:18.459 --rc genhtml_function_coverage=1 00:31:18.459 --rc genhtml_legend=1 00:31:18.459 --rc geninfo_all_blocks=1 00:31:18.459 --rc geninfo_unexecuted_blocks=1 00:31:18.459 00:31:18.459 ' 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:18.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.459 --rc genhtml_branch_coverage=1 00:31:18.459 --rc genhtml_function_coverage=1 00:31:18.459 --rc genhtml_legend=1 00:31:18.459 --rc geninfo_all_blocks=1 00:31:18.459 --rc geninfo_unexecuted_blocks=1 00:31:18.459 00:31:18.459 ' 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.459 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.460 13:05:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:18.460 13:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.620 13:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:26.620 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
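Annotation: the array setup above is gather_supported_nvmf_pci_devs bucketing NICs by PCI vendor:device pair before probing them. A standalone sketch of the same classification, using lspci -Dn in place of the harness's pci_bus_cache (illustrative only; the lspci output parsing is an assumption, not what common.sh actually does):

    intel=8086 mellanox=15b3
    declare -a e810 x722 mlx
    while read -r slot _ vendev _; do
      case "$vendev" in
        "$intel:1592"|"$intel:159b") e810+=("$slot") ;;  # Intel E810 family (ice driver)
        "$intel:37d2")               x722+=("$slot") ;;  # Intel X722
        "$mellanox:"*)               mlx+=("$slot")  ;;  # Mellanox ConnectX IDs listed above
      esac
    done < <(lspci -Dn)
    printf 'e810: %s\n' "${e810[*]:-none}"   # on this rig: 0000:31:00.0 and 0000:31:00.1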
00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:26.620 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.620 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:26.621 Found net devices under 0000:31:00.0: cvl_0_0 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:26.621 Found net devices under 0000:31:00.1: cvl_0_1 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.621 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:31:26.882 00:31:26.882 --- 10.0.0.2 ping statistics --- 00:31:26.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.882 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:31:26.882 00:31:26.882 --- 10.0.0.1 ping statistics --- 00:31:26.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.882 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:26.882 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.142 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:27.142 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.142 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.142 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.142 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=843708 
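Annotation: the network plumbing the harness just ran, collected in one place exactly as it appears in this trace. The target port (cvl_0_0) moves into a private namespace so target and initiator can share one host, and the SPDK_NVMF comment on the firewall rule is what lets teardown strip it later:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns reaches the target ...
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # ... and the namespace reaches back
    modprobe nvme-tcp                                    # kernel initiator support for later connects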
00:31:27.143 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 843708 00:31:27.143 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:27.143 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 843708 ']' 00:31:27.143 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.143 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.143 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.143 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.143 13:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.143 [2024-11-25 13:06:06.856009] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:27.143 [2024-11-25 13:06:06.857097] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:31:27.143 [2024-11-25 13:06:06.857144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.143 [2024-11-25 13:06:06.959543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:27.143 [2024-11-25 13:06:06.994533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.143 [2024-11-25 13:06:06.994565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.143 [2024-11-25 13:06:06.994573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.143 [2024-11-25 13:06:06.994580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.143 [2024-11-25 13:06:06.994586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.143 [2024-11-25 13:06:06.995947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.143 [2024-11-25 13:06:06.996108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.143 [2024-11-25 13:06:06.996109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.403 [2024-11-25 13:06:07.050940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:27.403 [2024-11-25 13:06:07.050994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:27.403 [2024-11-25 13:06:07.051516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
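Annotation: the launch recorded at nvmf/common.sh@508 above, reflowed so the flags are readable; each one lines up with a notice in the startup banner:

    ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE
    # -m 0xE           core mask 0b1110: cores 1-3, matching the three reactor notices
    # -e 0xFFFF        tracepoint group mask, hence the spdk_trace hint in the banner
    # -i 0             shared-memory ID, reused by 'spdk_trace -s nvmf -i 0'
    # --interrupt-mode reactors block on fds instead of busy-polling; this is the
    #                  behavior the whole nvmf_target_core_interrupt_mode suite exercises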
00:31:27.403 [2024-11-25 13:06:07.051852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.403 [2024-11-25 13:06:07.121013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.403 Malloc0 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.403 Delay0 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
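Annotation: the provisioning sequence above, reassembled as the rpc.py calls that rpc_cmd resolves to in this harness (a sketch, not a verbatim replay). The bdev_delay_create latency arguments are in microseconds, so Delay0 adds roughly a full second to every I/O, which is what leaves the abort exerciser commands in flight to cancel:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB RAM bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000         # avg/p99 read+write latency: ~1 s
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: any host, -s: serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0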
00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.403 [2024-11-25 13:06:07.212984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:27.403 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.404 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.404 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.404 13:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:27.676 [2024-11-25 13:06:07.338566] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:30.235 Initializing NVMe Controllers 00:31:30.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:30.236 controller IO queue size 128 less than required 00:31:30.236 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:30.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:30.236 Initialization complete. Launching workers. 
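Annotation: with the listener up on 10.0.0.2:4420, the abort exerciser launched above is the actual test body; the statistics it prints follow below. The invocation, reflowed:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128
    # -r: transport ID of the listener created two RPCs back
    # -c 0x1: single core; -t 1: run for one second; -l warning: quiet logging
    # -q 128: queue depth; the controller advertises less, hence the 'IO queue
    #         size 128 less than required' warning - the excess queues in the
    #         driver, which is exactly the state an abort test wants to create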
00:31:30.236 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29096 00:31:30.236 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29153, failed to submit 66 00:31:30.236 success 29096, unsuccessful 57, failed 0 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.236 rmmod nvme_tcp 00:31:30.236 rmmod nvme_fabrics 00:31:30.236 rmmod nvme_keyring 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 843708 ']' 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 843708 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 843708 ']' 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 843708 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 843708 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 843708' 00:31:30.236 killing process with pid 843708 00:31:30.236 
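Annotation: reading the tallies above: 29153 aborts were submitted, 29096 landed (their victim I/Os are the 29096 'failed' on the NS line), 57 completed without catching their target, 66 could not be submitted at all, and 123 I/Os finished normally; against a one-second delay bdev at queue depth 128, that shape is expected. The shutdown the trace walks through here, condensed (nvmftestfini plus killprocess, roughly; _remove_spdk_ns is what deletes the namespace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    sync
    modprobe -v -r nvme-tcp                               # drags nvme_fabrics/nvme_keyring out too
    kill 843708 && wait 843708                            # stop nvmf_tgt
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the harness's own rule
    ip netns delete cvl_0_0_ns_spdk                       # hands cvl_0_0 back to the root ns
    ip -4 addr flush cvl_0_1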
13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 843708 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 843708 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.236 13:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.151 13:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:32.151 00:31:32.151 real 0m13.985s 00:31:32.151 user 0m11.469s 00:31:32.151 sys 0m7.771s 00:31:32.151 13:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.151 13:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:32.151 ************************************ 00:31:32.151 END TEST nvmf_abort 00:31:32.151 ************************************ 00:31:32.151 13:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:32.151 13:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:32.151 13:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.151 13:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:32.151 ************************************ 00:31:32.151 START TEST nvmf_ns_hotplug_stress 00:31:32.151 ************************************ 00:31:32.151 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:32.413 * Looking for test storage... 
00:31:32.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:32.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.413 --rc genhtml_branch_coverage=1 00:31:32.413 --rc genhtml_function_coverage=1 00:31:32.413 --rc genhtml_legend=1 00:31:32.413 --rc geninfo_all_blocks=1 00:31:32.413 --rc geninfo_unexecuted_blocks=1 00:31:32.413 00:31:32.413 ' 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:32.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.413 --rc genhtml_branch_coverage=1 00:31:32.413 --rc genhtml_function_coverage=1 00:31:32.413 --rc genhtml_legend=1 00:31:32.413 --rc geninfo_all_blocks=1 00:31:32.413 --rc geninfo_unexecuted_blocks=1 00:31:32.413 00:31:32.413 ' 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:32.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.413 --rc genhtml_branch_coverage=1 00:31:32.413 --rc genhtml_function_coverage=1 00:31:32.413 --rc genhtml_legend=1 00:31:32.413 --rc geninfo_all_blocks=1 00:31:32.413 --rc geninfo_unexecuted_blocks=1 00:31:32.413 00:31:32.413 ' 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:32.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.413 --rc genhtml_branch_coverage=1 00:31:32.413 --rc genhtml_function_coverage=1 
00:31:32.413 --rc genhtml_legend=1 00:31:32.413 --rc geninfo_all_blocks=1 00:31:32.413 --rc geninfo_unexecuted_blocks=1 00:31:32.413 00:31:32.413 ' 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.413 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
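Annotation: the unrolled comparison a few lines up is scripts/common.sh's lt()/cmp_versions deciding that lcov 1.15 predates 2.x, which switches on the --rc lcov_branch_coverage/lcov_function_coverage flags captured in LCOV_OPTS above. A compact equivalent of that field-by-field compare (ver_lt is a stand-in name, not the harness's function):

    ver_lt() {  # succeeds when dotted version $1 sorts before $2
      local -a a b; local i
      IFS='.-' read -ra a <<< "$1"
      IFS='.-' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
      done
      return 1  # equal is not less-than
    }
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi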
00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:32.414 13:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.556 13:06:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:40.556 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:31:40.557 Found 0000:31:00.0 (0x8086 - 0x159b)
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:31:40.557 Found 0000:31:00.1 (0x8086 - 0x159b)
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:31:40.557 Found net devices under 0000:31:00.0: cvl_0_0
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:31:40.557 Found net devices under 0000:31:00.1: cvl_0_1
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:40.557 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:40.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:40.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms
00:31:40.817
00:31:40.817 --- 10.0.0.2 ping statistics ---
00:31:40.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:40.817 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:40.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:40.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:31:40.817
00:31:40.817 --- 10.0.0.1 ping statistics ---
00:31:40.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:40.817 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:31:40.817 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:40.818 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:31:40.818 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:40.818 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:40.818 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:40.818 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:40.818 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:40.818 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:40.818 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=849505
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 849505
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 849505 ']'
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:41.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:41.078 13:06:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:41.078 [2024-11-25 13:06:20.782387] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:31:41.078 [2024-11-25 13:06:20.783502] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization...
00:31:41.078 [2024-11-25 13:06:20.783552] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:41.078 [2024-11-25 13:06:20.892933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:41.078 [2024-11-25 13:06:20.944523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:41.078 [2024-11-25 13:06:20.944572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:41.078 [2024-11-25 13:06:20.944581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:41.078 [2024-11-25 13:06:20.944588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:41.078 [2024-11-25 13:06:20.944594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:41.078 [2024-11-25 13:06:20.946549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:41.078 [2024-11-25 13:06:20.946717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:41.078 [2024-11-25 13:06:20.946719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:41.338 [2024-11-25 13:06:21.022323] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:31:41.338 [2024-11-25 13:06:21.022398] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:31:41.338 [2024-11-25 13:06:21.022984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:31:41.338 [2024-11-25 13:06:21.023284] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
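The target itself is started as pid 849505 via ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE, and waitforlisten blocks until the RPC socket answers; the notices above confirm the result: interrupt mode enabled, three cores available (mask 0xE), reactors on cores 1-3, and every spdk_thread switched to intr mode. A minimal stand-in for that launch-and-wait step, assuming the suite's waitforlisten helper can be approximated by polling rpc_get_methods (paths shortened relative to the full workspace paths in the trace):

    # Start the target inside the namespace, then poll its RPC socket until it
    # answers; rpc_get_methods is a cheap query that succeeds once the app is up.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.2
    done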
00:31:41.908 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:41.908 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:31:41.908 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:41.908 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:41.908 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:41.908 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:41.908 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:31:41.908 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:42.169 [2024-11-25 13:06:21.827604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:42.169 13:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:42.169 13:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:42.429 [2024-11-25 13:06:22.176209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:42.429 13:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:42.690 13:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:31:42.690 Malloc0
00:31:42.690 13:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:42.950 Delay0
00:31:42.950 13:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:43.210 13:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:31:43.210 NULL1
00:31:43.210 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
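The RPCs above provision the freshly started target: a TCP transport (with the suite's -o -u 8192 transport options), subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001 and at most 10 namespaces, a data listener plus a discovery listener on 10.0.0.2:4420, a 32 MiB (512-byte-block) Malloc0 bdev wrapped by the deliberately slow Delay0 delay bdev, Delay0 attached as namespace 1, and the NULL1 null bdev created at size 1000 with 512-byte blocks. Everything that follows in this log is the stress phase: spdk_nvme_perf reads from the subsystem while namespace 1 is removed and re-added and NULL1 is resized one unit larger per pass, which is why null_size counts 1001, 1002, ... through the remainder of the trace. A condensed sketch of that loop, paraphrasing the traced ns_hotplug_stress.sh steps rather than quoting the script verbatim:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Initiator-side load: 30 s of 512-byte random reads over NVMe/TCP.
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    # Hot-plug churn while I/O is in flight: drop nsid 1, re-add it, and grow
    # the null bdev, repeating for as long as the perf process is alive.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size
    done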
00:31:43.470 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=849879 00:31:43.470 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:43.470 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:43.470 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.730 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.991 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:43.991 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:43.991 true 00:31:43.991 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:43.991 13:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.251 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.511 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:44.511 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:44.511 true 00:31:44.511 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:44.511 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.770 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.030 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:45.030 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:45.291 true 00:31:45.291 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:45.291 13:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.291 13:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.552 13:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:45.552 13:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:45.815 true 00:31:45.815 13:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:45.815 13:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.077 13:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.077 13:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:46.077 13:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:46.337 true 00:31:46.337 13:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:46.337 13:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.598 13:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.858 13:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:46.858 13:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:46.858 true 00:31:46.858 13:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:46.858 13:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.119 13:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:31:47.380 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:47.380 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:47.380 true 00:31:47.380 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:47.380 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.641 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.903 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:47.903 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:47.903 true 00:31:47.903 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:47.903 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.163 13:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.424 13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:48.424 13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:48.424 true 00:31:48.424 13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:48.424 13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.715 13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.976 13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:48.976 13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:48.976 true 00:31:48.976 13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:48.976 
13:06:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.236 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.496 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:49.496 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:49.496 true 00:31:49.757 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:49.757 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.757 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.018 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:50.018 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:50.279 true 00:31:50.279 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:50.279 13:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.541 13:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:50.541 13:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:50.541 13:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:50.802 true 00:31:50.803 13:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:50.803 13:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.063 13:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.063 13:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:51.063 13:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:51.324 true 00:31:51.324 13:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:51.324 13:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.585 13:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.847 13:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:51.847 13:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:51.847 true 00:31:51.847 13:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:51.847 13:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:52.109 13:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.371 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:52.371 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:52.371 true 00:31:52.371 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:52.371 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:52.632 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:52.893 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:52.893 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:52.893 true 00:31:53.155 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:53.155 13:06:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.155 13:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.416 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:53.416 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:53.675 true 00:31:53.675 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:53.675 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.675 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.936 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:53.936 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:54.197 true 00:31:54.197 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:54.197 13:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.197 13:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.458 13:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:54.458 13:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:54.719 true 00:31:54.720 13:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:54.720 13:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.981 13:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.981 13:06:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:54.981 13:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:55.242 true 00:31:55.242 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:55.242 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.502 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.502 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:55.502 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:55.764 true 00:31:55.764 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:55.764 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.024 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:56.286 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:56.286 13:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:56.286 true 00:31:56.286 13:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:56.286 13:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.572 13:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:56.861 13:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:56.861 13:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:56.861 true 00:31:56.861 13:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:56.861 13:06:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.203 13:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.203 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:57.203 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:57.465 true 00:31:57.465 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:57.465 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.726 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.726 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:57.726 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:57.988 true 00:31:57.988 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:57.988 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.250 13:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.512 13:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:58.512 13:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:58.512 true 00:31:58.512 13:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:58.512 13:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.773 13:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.035 13:06:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:59.035 13:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:59.035 true 00:31:59.035 13:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:59.035 13:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.295 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.556 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:59.556 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:59.556 true 00:31:59.816 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:31:59.816 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.816 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.078 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:32:00.078 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:32:00.078 true 00:32:00.339 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:32:00.339 13:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.339 13:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.600 13:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:32:00.601 13:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:32:00.862 true 00:32:00.862 13:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:32:00.862 13:06:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.862 13:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:01.122 13:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:32:01.122 13:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:32:01.381 true 00:32:01.381 13:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:32:01.381 13:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.642 13:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:01.642 13:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:32:01.642 13:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:32:01.902 true 00:32:01.902 13:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:32:01.902 13:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.163 13:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.163 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:32:02.163 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:32:02.424 true 00:32:02.424 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879 00:32:02.424 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.685 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.946 13:06:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:32:02.946 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:32:02.946 true
00:32:02.946 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:02.946 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:03.208 13:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:03.468 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:32:03.468 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:32:03.468 true
00:32:03.468 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:03.468 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:03.729 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:03.990 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:32:03.990 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:32:03.990 true
00:32:03.990 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:03.990 13:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:04.251 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:04.512 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:32:04.512 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:32:04.512 true
00:32:04.773 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:04.773 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:04.773 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:05.034 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:32:05.034 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:32:05.294 true
00:32:05.294 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:05.294 13:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:05.294 13:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:05.555 13:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:32:05.555 13:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:32:05.815 true
00:32:05.815 13:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:05.815 13:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:06.076 13:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:06.076 13:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:32:06.076 13:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:32:06.337 true
00:32:06.337 13:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:06.337 13:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:06.597 13:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:06.597 13:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:32:06.597 13:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:32:06.858 true
00:32:06.858 13:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:06.858 13:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:07.118 13:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:07.118 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:32:07.119 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:32:07.379 true
00:32:07.379 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:07.379 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:07.640 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:07.900 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:32:07.900 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:32:07.900 true
00:32:07.900 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:07.900 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:08.162 13:06:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:08.422 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:32:08.422 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:32:08.422 true
00:32:08.422 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:08.422 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:08.682 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:08.943 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:32:08.943 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:32:09.203 true
00:32:09.203 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:09.203 13:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:09.203 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:09.463 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:32:09.463 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:32:09.723 true
00:32:09.723 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:09.723 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:09.723 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:09.983 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:32:09.983 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:32:10.243 true
00:32:10.243 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:10.243 13:06:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:10.503 13:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:10.503 13:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:32:10.503 13:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:32:10.764 true
00:32:10.764 13:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:10.764 13:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:11.025 13:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:11.286 13:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:32:11.286 13:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:32:11.286 true
00:32:11.286 13:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:11.286 13:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:11.547 13:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:11.808 13:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:32:11.808 13:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:32:11.808 true
00:32:11.808 13:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:11.808 13:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:12.069 13:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:12.330 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:32:12.330 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:32:12.330 true
00:32:12.591 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:12.591 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:12.591 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:12.851 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:32:12.851 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:32:13.112 true
00:32:13.112 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:13.112 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:13.112 13:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:13.373 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:32:13.373 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:32:13.633 true
00:32:13.633 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:13.633 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:13.894 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:13.894 Initializing NVMe Controllers
00:32:13.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:13.894 Controller IO queue size 128, less than required.
00:32:13.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:13.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:13.894 Initialization complete. Launching workers.
00:32:13.894 ========================================================
00:32:13.894                                                                                                          Latency(us)
00:32:13.894 Device Information                                                             : IOPS       MiB/s    Average      min      max
00:32:13.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:        29727.13      14.52    4305.89    1478.19   10800.56
00:32:13.894 ========================================================
00:32:13.894 Total                                                                          : 29727.13      14.52    4305.89    1478.19   10800.56
00:32:13.894
00:32:13.894 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:32:13.894 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:32:14.167 true
00:32:14.167 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 849879
00:32:14.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (849879) - No such process
00:32:14.167 13:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 849879
00:32:14.167 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:14.167 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
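The records above trace the first phase of the test: a resize/hot-plug loop that keeps running for as long as the background I/O generator (PID 849879) stays alive, and whose normal exit path is exactly the "No such process" message followed by the wait at sh@53. A minimal bash sketch of what ns_hotplug_stress.sh lines 44-50 appear to be doing, reconstructed only from the logged commands; the $rpc and $perf_pid names are assumptions, with $rpc standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path:

    # Sketch reconstructed from the trace above; $rpc and $perf_pid are assumed names.
    null_size=1000
    while kill -0 "$perf_pid" 2> /dev/null; do                          # sh@44: stop once the I/O generator exits
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach the Delay0 bdev
        (( ++null_size ))                                               # sh@49: next target size
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # sh@50: resize NULL1 while I/O is in flight
    done

The point of the design is that every namespace remove/add and every resize lands while the perf workload is issuing I/O against the subsystem, so the target's hot-plug paths are exercised under load rather than at idle.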
00:32:14.428 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:32:14.428 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:32:14.428 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:32:14.428 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:14.428 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:32:14.689 null0
00:32:14.689 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:14.689 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:14.689 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:32:14.689 null1
00:32:14.689 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:14.689 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:14.689 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:32:14.949 null2
00:32:14.949 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:14.949 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:14.949 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:32:15.209 null3
00:32:15.209 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:15.209 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:15.209 13:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:32:15.209 null4
00:32:15.209 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:15.209 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:15.209 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:32:15.469 null5
00:32:15.469 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:15.469 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:15.469 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:32:15.778 null6
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:32:15.778 null7
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:32:15.778 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 856096 856098 856100 856104 856107 856109 856110 856113
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:15.779 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:16.041 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:16.041 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:16.041 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:16.041 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:16.041 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:16.041 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:16.041 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:16.041 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
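At this point the script fans out: eight null bdevs are created, then eight add_remove workers are launched in the background, one namespace each, and the wait at sh@66 blocks on all of them. A minimal sketch of lines 58-66 under the same assumptions ($rpc as shorthand for the full rpc.py path; add_remove as invoked at sh@63):

    # Sketch of the fan-out at sh@58-66; sizes match the logged bdev_null_create calls.
    nthreads=8                                        # sh@58
    pids=()                                           # sh@58
    for (( i = 0; i < nthreads; ++i )); do            # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096     # sh@60: 100 MiB null bdev, 4096-byte blocks, one per worker
    done
    for (( i = 0; i < nthreads; ++i )); do            # sh@62
        add_remove "$((i + 1))" "null$i" &            # sh@63: worker for NSID i+1 runs concurrently
        pids+=($!)                                    # sh@64: remember its PID
    done
    wait "${pids[@]}"                                 # sh@66: block until every worker finishes

The eight background subshells share the console, which is why the sh@14-18 records that follow are interleaved and the add/remove operations for different namespace IDs appear out of order.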
00:32:16.303 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.303 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.303 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:16.303 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.303 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.303 13:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.303 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.304 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.304 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:16.304 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:16.304 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:16.304 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.564 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.565 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:16.825 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:17.086 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.086 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.086 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:17.086 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:17.086 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.087 13:06:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.348 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.611 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:32:17.874 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.137 13:06:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:18.137 13:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.137 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.137 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:18.137 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:18.399 
13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.399 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.661 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.923 13:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:18.923 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.185 13:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.185 13:06:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.185 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.447 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
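
The interleaved @16/@17/@18 entries above are iterations of the namespace hotplug stress loop in target/ns_hotplug_stress.sh. A minimal sketch of the pattern being exercised, reconstructed from the trace (the script's actual body, its choice of namespace IDs, and its error handling may differ):

    #!/usr/bin/env bash
    # Sketch only: assumes the subsystem nqn.2016-06.io.spdk:cnode1 already
    # exists and null bdevs null0..null7 were created earlier in the test.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do      # the (( ++i )) / (( i < 10 )) entries (line 16)
        n=$(( RANDOM % 8 + 1 ))           # hypothetical ID pick; the trace pairs nsid N with bdev null(N-1)
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" || true      # line 17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$(( RANDOM % 8 + 1 ))" || true      # line 18
    done

Individual adds and removes are expected to race and sometimes fail; the point of the loop is that the target keeps serving RPCs throughout.
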
00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.712 rmmod nvme_tcp 00:32:19.712 rmmod nvme_fabrics 00:32:19.712 rmmod nvme_keyring 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 849505 ']' 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 849505 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 849505 ']' 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 849505 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.712 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 849505
00:32:19.972 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:19.972 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:19.972 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 849505'
killing process with pid 849505
00:32:19.972 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 849505
00:32:19.972 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 849505
00:32:19.972 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:19.972 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:19.972 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:19.973 13:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:22.545 13:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:22.545
00:32:22.545 real 0m49.835s
00:32:22.545 user 3m3.050s
00:32:22.545 sys 0m22.931s
00:32:22.545 13:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:22.545 13:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:22.545 ************************************
00:32:22.545 END TEST nvmf_ns_hotplug_stress
00:32:22.545 ************************************
00:32:22.545 13:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:22.545 13:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
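
The iptr helper traced at nvmf/common.sh@297 and @791 above removes only the firewall rules the test framework added, by replaying the current ruleset without the SPDK-tagged entries. In essence (a sketch of the traced pipeline, not the full helper):

    # Drop every rule carrying the SPDK_NVMF comment tag, keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
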
00:32:22.545 13:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:22.545 13:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:22.545 ************************************
00:32:22.545 START TEST nvmf_delete_subsystem
00:32:22.545 ************************************
00:32:22.545 13:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:22.545 * Looking for test storage...
00:32:22.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:32:22.545 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
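
The scripts/common.sh entries above and below trace cmp_versions 1.15 '<' 2: both version strings are split on '.', '-' and ':' into arrays, and the loop entered at @364 walks the components pairwise. A condensed sketch of that comparison, assuming this shape for the helper (the real one also validates each component via decimal and supports more operators):

    # Return success if version $1 satisfies operator $2 against version $3.
    cmp_versions_sketch() {
        local -a ver1 ver2
        local op=$2 v lt=0 gt=0
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }   # here: 1 < 2, so lt=1
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }

For the traced call, ver1=(1 15) and ver2=(2), so the very first component decides the result, which is why the trace below reaches return 0 after a single pass.
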
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:32:22.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:22.546 --rc genhtml_branch_coverage=1
00:32:22.546 --rc genhtml_function_coverage=1
00:32:22.546 --rc genhtml_legend=1
00:32:22.546 --rc geninfo_all_blocks=1
00:32:22.546 --rc geninfo_unexecuted_blocks=1
00:32:22.546
00:32:22.546 '
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:32:22.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:22.546 --rc genhtml_branch_coverage=1
00:32:22.546 --rc genhtml_function_coverage=1
00:32:22.546 --rc genhtml_legend=1
00:32:22.546 --rc geninfo_all_blocks=1
00:32:22.546 --rc geninfo_unexecuted_blocks=1
00:32:22.546
00:32:22.546 '
00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:32:22.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:22.546 --rc genhtml_branch_coverage=1
00:32:22.546 --rc genhtml_function_coverage=1
00:32:22.546 --rc
genhtml_legend=1 00:32:22.546 --rc geninfo_all_blocks=1 00:32:22.546 --rc geninfo_unexecuted_blocks=1 00:32:22.546 00:32:22.546 ' 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.546 13:07:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.546 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.547 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.547 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:22.547 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:22.547 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.547 13:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.712 13:07:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.712 13:07:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:32:30.712 Found 0000:31:00.0 (0x8086 - 0x159b)
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:32:30.712 Found 0000:31:00.1 (0x8086 - 0x159b)
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
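
The @410 to @427 entries above map each discovered PCI function to its kernel network interface through sysfs; the interface name is simply the directory entry under the device's net/ subtree. A sketch of that lookup (the device addresses are taken from this log; any NIC is handled the same way):

    # For each network PCI function, the kernel publishes the netdev name
    # under /sys/bus/pci/devices/<domain:bus:dev.fn>/net/.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] && echo "$pci -> ${path##*/}"   # e.g. 0000:31:00.0 -> cvl_0_0
        done
    done
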
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:30.712 Found net devices under 0000:31:00.0: cvl_0_0 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.712 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:30.713 Found net devices under 0000:31:00.1: cvl_0_1 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.713 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:32:30.974 00:32:30.974 --- 10.0.0.2 ping statistics --- 00:32:30.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.974 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:30.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:32:30.974 00:32:30.974 --- 10.0.0.1 ping statistics --- 00:32:30.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.974 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=861920 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 861920 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 861920 ']' 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.974 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:30.975 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.975 13:07:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:30.975 [2024-11-25 13:07:10.787697] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.975 [2024-11-25 13:07:10.788928] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:32:30.975 [2024-11-25 13:07:10.788987] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.235 [2024-11-25 13:07:10.879791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:31.235 [2024-11-25 13:07:10.920193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.235 [2024-11-25 13:07:10.920230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.235 [2024-11-25 13:07:10.920238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.235 [2024-11-25 13:07:10.920245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.235 [2024-11-25 13:07:10.920251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.236 [2024-11-25 13:07:10.921501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.236 [2024-11-25 13:07:10.921503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.236 [2024-11-25 13:07:10.976844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.236 [2024-11-25 13:07:10.977366] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:31.236 [2024-11-25 13:07:10.977710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
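For reference, the nvmf_tcp_init sequence traced above boils down to roughly the following commands (a minimal sketch reconstructed from this run's xtrace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are the values this particular machine discovered, and the iptables rule is shown without the SPDK_NVMF comment tag the harness adds):

# target-side port moves into its own namespace; the initiator side stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# then launch the target inside the namespace, pinned to cores 0-1, in interrupt mode
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3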
00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:31.808 [2024-11-25 13:07:11.634160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:31.808 [2024-11-25 13:07:11.662643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:31.808 NULL1 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.808 13:07:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:31.808 Delay0 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=862005 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:31.808 13:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:32.070 [2024-11-25 13:07:11.761331] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
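Stripped of the rpc_cmd plumbing, the target configuration that perf then exercises is just these RPCs (a sketch, assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the bdev_delay latencies are in microseconds, i.e. roughly one second of added latency per I/O):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512-byte blocks
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# initiator side: 5 s of 70/30 randrw 512-byte I/O at queue depth 128 from cores 2-3
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

The one-second Delay0 latency is what guarantees perf still has I/O in flight when nvmf_delete_subsystem is issued below.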
00:32:33.985 13:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:33.985 13:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:33.985 13:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:34.246 (repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" records, interleaved with "starting I/O failed: -6", elided)
00:32:34.247 [2024-11-25 13:07:13.963987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcd2c0 is same with the state(6) to be set
00:32:34.247 (repeated completion-with-error records, interleaved with "starting I/O failed: -6", elided)
00:32:34.247 [2024-11-25 13:07:13.965639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f197000d6b0 is same with the state(6) to be set
00:32:34.247 (repeated completion-with-error records elided)
00:32:35.191 [2024-11-25 13:07:14.941933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfce5e0 is same with the state(6) to be set
00:32:35.191 (repeated completion-with-error records elided)
00:32:35.191 [2024-11-25 13:07:14.967827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcd0e0 is same with the state(6) to be set
00:32:35.191 (repeated completion-with-error records elided)
00:32:35.191 [2024-11-25 13:07:14.968502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f197000d380 is same with the state(6) to be set
00:32:35.191 (repeated completion-with-error records elided)
00:32:35.191 [2024-11-25 13:07:14.968603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1970000c70 is same with the state(6) to be set
00:32:35.191 (repeated completion-with-error records elided)
00:32:35.191 [2024-11-25 13:07:14.968695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcd4a0 is same with the state(6) to be set
00:32:35.192 Initializing NVMe Controllers
00:32:35.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:35.192 Controller IO queue size 128, less than required.
00:32:35.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:35.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:35.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:35.192 Initialization complete. Launching workers.
00:32:35.192 ========================================================
00:32:35.192 Latency(us)
00:32:35.192 Device Information : IOPS MiB/s Average min max
00:32:35.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.66 0.08 893182.75 233.91 1009100.10
00:32:35.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.24 0.08 978545.02 295.36 2002834.78
00:32:35.192 ========================================================
00:32:35.192 Total : 324.89 0.16 933706.95 233.91 2002834.78
00:32:35.192
00:32:35.192 [2024-11-25 13:07:14.969332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfce5e0 (9): Bad file descriptor
00:32:35.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:35.192 13:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:35.192 13:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:32:35.192 13:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 862005
00:32:35.192 13:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:35.764 [2024-11-25 13:07:15.502459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=862788 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 862788 00:32:35.764 13:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:35.764 [2024-11-25 13:07:15.574933] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
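The sleep-0.5 polling that follows is the whole synchronization mechanism: delete_subsystem.sh simply waits for spdk_nvme_perf to exit once its subsystem is gone. Reconstructed from the trace ordering (the exact failure handling in the script is not visible in this excerpt), the loop is approximately:

delay=0
# pid 862788 is this run's spdk_nvme_perf; nvmf_delete_subsystem is issued
# while it still has queued I/O against Delay0
while kill -0 "$perf_pid" 2>/dev/null; do
    sleep 0.5
    (( delay++ > 20 )) && exit 1   # give up if perf outlives ~10 s of polling
done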
00:32:36.334 13:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:36.334 13:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 862788 00:32:36.334 13:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:36.905 13:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:36.905 13:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 862788 00:32:36.905 13:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:37.166 13:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:37.166 13:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 862788 00:32:37.166 13:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:37.738 13:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:37.738 13:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 862788 00:32:37.738 13:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:38.309 13:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:38.309 13:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 862788 00:32:38.309 13:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:38.878 13:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:38.878 13:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 862788 00:32:38.878 13:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:39.139 Initializing NVMe Controllers 00:32:39.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:39.139 Controller IO queue size 128, less than required. 00:32:39.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:39.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:39.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:39.139 Initialization complete. Launching workers. 
00:32:39.139 ========================================================
00:32:39.139 Latency(us)
00:32:39.139 Device Information : IOPS MiB/s Average min max
00:32:39.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002072.36 1000251.73 1005642.06
00:32:39.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004256.30 1000402.95 1041117.94
00:32:39.139 ========================================================
00:32:39.139 Total : 256.00 0.12 1003164.33 1000251.73 1041117.94
00:32:39.139
00:32:39.399 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:39.399 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 862788
00:32:39.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (862788) - No such process
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 862788
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:39.400 rmmod nvme_tcp
00:32:39.400 rmmod nvme_fabrics
00:32:39.400 rmmod nvme_keyring
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 861920 ']'
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 861920
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 861920 ']'
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 861920
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 861920 00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 861920' 00:32:39.400 killing process with pid 861920 00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 861920 00:32:39.400 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 861920 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.660 13:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.574 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.574 00:32:41.574 real 0m19.483s 00:32:41.574 user 0m27.000s 00:32:41.574 sys 0m8.279s 00:32:41.574 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.574 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:41.574 ************************************ 00:32:41.574 END TEST nvmf_delete_subsystem 00:32:41.574 ************************************ 00:32:41.574 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:41.574 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:41.574 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.574 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:41.837 ************************************ 00:32:41.837 START TEST nvmf_host_management 00:32:41.837 ************************************ 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:41.837 * Looking for test storage... 00:32:41.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.837 --rc genhtml_branch_coverage=1 00:32:41.837 --rc genhtml_function_coverage=1 00:32:41.837 --rc genhtml_legend=1 00:32:41.837 --rc geninfo_all_blocks=1 00:32:41.837 --rc geninfo_unexecuted_blocks=1 00:32:41.837 00:32:41.837 ' 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.837 --rc genhtml_branch_coverage=1 00:32:41.837 --rc genhtml_function_coverage=1 00:32:41.837 --rc genhtml_legend=1 00:32:41.837 --rc geninfo_all_blocks=1 00:32:41.837 --rc geninfo_unexecuted_blocks=1 00:32:41.837 00:32:41.837 ' 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.837 --rc genhtml_branch_coverage=1 00:32:41.837 --rc genhtml_function_coverage=1 00:32:41.837 --rc genhtml_legend=1 00:32:41.837 --rc geninfo_all_blocks=1 00:32:41.837 --rc geninfo_unexecuted_blocks=1 00:32:41.837 00:32:41.837 ' 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.837 --rc genhtml_branch_coverage=1 00:32:41.837 --rc genhtml_function_coverage=1 00:32:41.837 --rc genhtml_legend=1 
00:32:41.837 --rc geninfo_all_blocks=1 00:32:41.837 --rc geninfo_unexecuted_blocks=1 00:32:41.837 00:32:41.837 ' 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.837 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.838 13:07:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.838 13:07:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:49.983 13:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:49.983 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:49.983 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:49.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
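The loop traced here (nvmf/common.sh@410-428) maps each supported PCI function to its kernel net devices by globbing sysfs. A minimal standalone sketch of the same lookup, using the first E810 port from this run as the example address (substitute your own):

    #!/usr/bin/env bash
    # List the net devices the kernel has bound to one PCI function.
    pci=0000:31:00.0                                   # first E810 port in this log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
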
00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:49.984 Found net devices under 0000:31:00.0: cvl_0_0 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:49.984 Found net devices under 0000:31:00.1: cvl_0_1 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.984 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.246 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.246 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.246 13:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.246 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.246 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.246 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.246 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.751 ms 00:32:50.507 00:32:50.507 --- 10.0.0.2 ping statistics --- 00:32:50.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.507 rtt min/avg/max/mdev = 0.751/0.751/0.751/0.000 ms 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:32:50.507 00:32:50.507 --- 10.0.0.1 ping statistics --- 00:32:50.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.507 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=868300 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 868300 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 868300 ']' 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:50.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.507 13:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:50.507 [2024-11-25 13:07:30.312279] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:50.507 [2024-11-25 13:07:30.313414] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:32:50.507 [2024-11-25 13:07:30.313464] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.769 [2024-11-25 13:07:30.422338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:50.769 [2024-11-25 13:07:30.475500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.769 [2024-11-25 13:07:30.475555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.769 [2024-11-25 13:07:30.475564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.769 [2024-11-25 13:07:30.475571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.769 [2024-11-25 13:07:30.475577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.769 [2024-11-25 13:07:30.477568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.769 [2024-11-25 13:07:30.477733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:50.769 [2024-11-25 13:07:30.477874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.769 [2024-11-25 13:07:30.477884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:50.769 [2024-11-25 13:07:30.553201] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:50.769 [2024-11-25 13:07:30.553831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:50.769 [2024-11-25 13:07:30.554805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:50.769 [2024-11-25 13:07:30.554840] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:50.770 [2024-11-25 13:07:30.555037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
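At this point the target is up: nvmfappstart launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten is polling its RPC socket. A condensed sketch of that launch-and-wait pattern, assuming the paths and namespace name from this run (the real waitforlisten in autotest_common.sh is more elaborate):

    # Start the target in the test namespace, pinned to cores 1-4 (-m 0x1E),
    # with all tracepoint groups enabled (-e 0xFFFF) and interrupt mode on.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Poll until the app answers on its UNIX-domain RPC socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done
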
00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:51.343 [2024-11-25 13:07:31.166813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.343 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:51.343 Malloc0 00:32:51.604 [2024-11-25 13:07:31.255088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=868370 00:32:51.604 13:07:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 868370 /var/tmp/bdevperf.sock 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 868370 ']' 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:51.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:51.604 { 00:32:51.604 "params": { 00:32:51.604 "name": "Nvme$subsystem", 00:32:51.604 "trtype": "$TEST_TRANSPORT", 00:32:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:51.604 "adrfam": "ipv4", 00:32:51.604 "trsvcid": "$NVMF_PORT", 00:32:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:51.604 "hdgst": ${hdgst:-false}, 00:32:51.604 "ddgst": ${ddgst:-false} 00:32:51.604 }, 00:32:51.604 "method": "bdev_nvme_attach_controller" 00:32:51.604 } 00:32:51.604 EOF 00:32:51.604 )") 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
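The bdevperf invocation at host_management.sh@72 above feeds the generated config in through process substitution, which bash exposes as the /dev/fd/63 seen on the traced command line. Spelled out as a sketch, with gen_nvmf_target_json being the helper whose heredoc is traced here:

    # bdevperf reads the attach-controller config from a file descriptor
    # produced by <(...), hence "--json /dev/fd/63" in the traced command.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
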
00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:51.604 13:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:51.604 "params": { 00:32:51.604 "name": "Nvme0", 00:32:51.604 "trtype": "tcp", 00:32:51.604 "traddr": "10.0.0.2", 00:32:51.604 "adrfam": "ipv4", 00:32:51.604 "trsvcid": "4420", 00:32:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:51.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:51.604 "hdgst": false, 00:32:51.604 "ddgst": false 00:32:51.604 }, 00:32:51.604 "method": "bdev_nvme_attach_controller" 00:32:51.604 }' 00:32:51.604 [2024-11-25 13:07:31.359131] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:32:51.604 [2024-11-25 13:07:31.359184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868370 ] 00:32:51.604 [2024-11-25 13:07:31.437715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.604 [2024-11-25 13:07:31.474363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.176 Running I/O for 10 seconds... 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=600 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 600 -ge 100 ']' 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.438 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:52.439 [2024-11-25 13:07:32.226430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599770 is same with the state(6) to be set 00:32:52.439 [2024-11-25 13:07:32.226469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599770 is same with the state(6) to be set 00:32:52.439 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.439 [2024-11-25 13:07:32.231555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.439 [2024-11-25 13:07:32.231591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.231602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.439 [2024-11-25 13:07:32.231610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.231618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.439 [2024-11-25 13:07:32.231626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.231634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.439 [2024-11-25 13:07:32.231641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.231649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227c040 is same with the state(6) to be set 00:32:52.439 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:52.439 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.439 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:52.439 [2024-11-25 13:07:32.242328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227c040 (9): Bad file descriptor 00:32:52.439 [2024-11-25 13:07:32.242403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.439 [2024-11-25 13:07:32.242561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.439 [2024-11-25 13:07:32.242570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.440 [2024-11-25 13:07:32.242587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.440 [2024-11-25 13:07:32.242604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.440 [2024-11-25 13:07:32.242620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.440 [2024-11-25 13:07:32.242638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.440 [2024-11-25 13:07:32.242655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.440 [2024-11-25 13:07:32.242671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.440 [2024-11-25 13:07:32.242688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.440 [2024-11-25 13:07:32.242705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.440 [2024-11-25 13:07:32.242712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.441 [2024-11-25 13:07:32.242721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.441 [2024-11-25 13:07:32.242729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the identical WRITE / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:19 through cid:63 (lba 92544-98176, len:128 each) as qpair 1 is drained during the controller reset; roughly 90 near-identical notice lines elided ...]
00:32:52.443 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:52.444 13:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:32:52.444 [2024-11-25 13:07:32.244719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:52.444 task offset: 90112 on job bdev=Nvme0n1 fails
00:32:52.444
00:32:52.444 Latency(us)
00:32:52.444 [2024-11-25T12:07:32.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:52.444 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.444 Job: Nvme0n1 ended in about 0.46 seconds with error
00:32:52.444 Verification LBA range: start 0x0 length 0x400
00:32:52.444 Nvme0n1 : 0.46 1517.49 94.84 137.95 0.00 37564.63 1529.17 32986.45
00:32:52.444 [2024-11-25T12:07:32.347Z] ===================================================================================================================
00:32:52.444 [2024-11-25T12:07:32.347Z] Total : 1517.49 94.84 137.95 0.00 37564.63 1529.17 32986.45
00:32:52.444 [2024-11-25 13:07:32.246708] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:52.705 [2024-11-25 13:07:32.340961]
bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 868370 00:32:53.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (868370) - No such process 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:53.644 { 00:32:53.644 "params": { 00:32:53.644 "name": "Nvme$subsystem", 00:32:53.644 "trtype": "$TEST_TRANSPORT", 00:32:53.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:53.644 "adrfam": "ipv4", 00:32:53.644 "trsvcid": "$NVMF_PORT", 00:32:53.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:53.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:53.644 "hdgst": ${hdgst:-false}, 00:32:53.644 "ddgst": ${ddgst:-false} 00:32:53.644 }, 00:32:53.644 "method": "bdev_nvme_attach_controller" 00:32:53.644 } 00:32:53.644 EOF 00:32:53.644 )") 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:53.644 13:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:53.644 "params": { 00:32:53.644 "name": "Nvme0", 00:32:53.644 "trtype": "tcp", 00:32:53.644 "traddr": "10.0.0.2", 00:32:53.644 "adrfam": "ipv4", 00:32:53.644 "trsvcid": "4420", 00:32:53.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:53.644 "hdgst": false, 00:32:53.644 "ddgst": false 00:32:53.644 }, 00:32:53.644 "method": "bdev_nvme_attach_controller" 00:32:53.644 }' 00:32:53.644 [2024-11-25 13:07:33.301975] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
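To make the second bdevperf pass above reproducible outside the harness: gen_nvmf_target_json does nothing more than print the bdev_nvme_attach_controller stanza shown in the trace onto a file descriptor, which bdevperf then reads via --json /dev/fd/62. A minimal standalone sketch follows, writing the same parameters to a regular file instead; the outer subsystems/bdev wrapper is an assumption about bdevperf's JSON config format rather than something printed verbatim above, and it presumes a target already listening at 10.0.0.2:4420.

# Sketch: hand-rolled equivalent of "gen_nvmf_target_json 0 | bdevperf --json /dev/fd/62 ..."
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
# Same flags as the run above: queue depth 64, 64 KiB IOs, verify workload, 1 second.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1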
00:32:53.644 [2024-11-25 13:07:33.302034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868772 ] 00:32:53.644 [2024-11-25 13:07:33.378251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.644 [2024-11-25 13:07:33.413799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.903 Running I/O for 1 seconds... 00:32:54.842 1815.00 IOPS, 113.44 MiB/s 00:32:54.842 Latency(us) 00:32:54.842 [2024-11-25T12:07:34.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.842 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:54.842 Verification LBA range: start 0x0 length 0x400 00:32:54.842 Nvme0n1 : 1.01 1856.76 116.05 0.00 0.00 33743.98 3768.32 35170.99 00:32:54.842 [2024-11-25T12:07:34.745Z] =================================================================================================================== 00:32:54.842 [2024-11-25T12:07:34.745Z] Total : 1856.76 116.05 0.00 0.00 33743.98 3768.32 35170.99 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.842 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:55.101 rmmod nvme_tcp 00:32:55.101 rmmod nvme_fabrics 00:32:55.101 rmmod nvme_keyring 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 868300 ']' 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 868300 00:32:55.101 13:07:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 868300 ']' 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 868300 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 868300 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:55.101 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 868300' 00:32:55.101 killing process with pid 868300 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 868300 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 868300 00:32:55.102 [2024-11-25 13:07:34.967497] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:55.102 13:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:55.102 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:55.102 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:55.102 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.102 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.102 13:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:57.642 00:32:57.642 real 0m15.589s 00:32:57.642 user 0m19.179s 
00:32:57.642 sys 0m8.248s 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:57.642 ************************************ 00:32:57.642 END TEST nvmf_host_management 00:32:57.642 ************************************ 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:57.642 ************************************ 00:32:57.642 START TEST nvmf_lvol 00:32:57.642 ************************************ 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:57.642 * Looking for test storage... 00:32:57.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:57.642 13:07:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:57.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.642 --rc genhtml_branch_coverage=1 00:32:57.642 --rc genhtml_function_coverage=1 00:32:57.642 --rc genhtml_legend=1 00:32:57.642 --rc geninfo_all_blocks=1 00:32:57.642 --rc geninfo_unexecuted_blocks=1 00:32:57.642 00:32:57.642 ' 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:57.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.642 --rc genhtml_branch_coverage=1 00:32:57.642 --rc genhtml_function_coverage=1 00:32:57.642 --rc genhtml_legend=1 00:32:57.642 --rc geninfo_all_blocks=1 00:32:57.642 --rc geninfo_unexecuted_blocks=1 00:32:57.642 00:32:57.642 ' 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:57.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.642 --rc genhtml_branch_coverage=1 00:32:57.642 --rc genhtml_function_coverage=1 00:32:57.642 --rc genhtml_legend=1 00:32:57.642 --rc geninfo_all_blocks=1 00:32:57.642 --rc geninfo_unexecuted_blocks=1 00:32:57.642 00:32:57.642 ' 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:57.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.642 --rc genhtml_branch_coverage=1 00:32:57.642 --rc genhtml_function_coverage=1 00:32:57.642 --rc 
genhtml_legend=1 00:32:57.642 --rc geninfo_all_blocks=1 00:32:57.642 --rc geninfo_unexecuted_blocks=1 00:32:57.642 00:32:57.642 ' 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.642 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.643 13:07:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.643 13:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:05.785 13:07:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:05.785 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:05.785 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:05.785 Found net devices under 0000:31:00.0: cvl_0_0 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.785 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:05.786 Found net devices under 0000:31:00.1: cvl_0_1 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:05.786 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.786 
13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:06.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:33:06.047 00:33:06.047 --- 10.0.0.2 ping statistics --- 00:33:06.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.047 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:06.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:33:06.047 00:33:06.047 --- 10.0.0.1 ping statistics --- 00:33:06.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.047 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.047 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=873846 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 873846 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 873846 ']' 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.048 13:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:06.310 [2024-11-25 13:07:45.963295] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
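For readers untangling the interleaved trace: the nvmf_tcp_init sequence that just ran splits the two E810 ports (cvl_0_0 / cvl_0_1) across network namespaces so initiator and target traffic genuinely crosses the wire. Condensed from the commands traced above (device names are specific to this CI host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target (0.626 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator (0.324 ms above)

The target binary is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7), which is why the pid-873846 startup notices that follow carry interrupt-mode reactors on cores 0-2.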
00:33:06.310 [2024-11-25 13:07:45.964452] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:33:06.310 [2024-11-25 13:07:45.964507] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.310 [2024-11-25 13:07:46.056610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:06.310 [2024-11-25 13:07:46.097511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:06.310 [2024-11-25 13:07:46.097550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:06.310 [2024-11-25 13:07:46.097558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:06.310 [2024-11-25 13:07:46.097565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:06.310 [2024-11-25 13:07:46.097571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:06.310 [2024-11-25 13:07:46.098984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.310 [2024-11-25 13:07:46.099212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:06.310 [2024-11-25 13:07:46.099217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.310 [2024-11-25 13:07:46.154486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:06.310 [2024-11-25 13:07:46.155099] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:06.310 [2024-11-25 13:07:46.155365] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:06.310 [2024-11-25 13:07:46.155567] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
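With the target up in interrupt mode, the nvmf_lvol test body traced below drives everything through scripts/rpc.py. A condensed sketch of that provisioning chain, with rpc.py abbreviating the full workspace path and the UUID capture simplified (the real UUIDs differ per run):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                      # Malloc0
rpc.py bdev_malloc_create 64 512                      # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)      # lvstore on the raid0 bdev
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB logical volume
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # snapshot taken while perf runs
rpc.py bdev_lvol_resize "$lvol" 30                    # grow the lvol to 30 MiB
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"                     # decouple the clone from its snapshot
# teardown, as at the end of the test:
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete "$lvol"
rpc.py bdev_lvol_delete_lvstore -u "$lvs"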
00:33:06.882 13:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.882 13:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:06.882 13:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:06.882 13:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:06.882 13:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:07.142 13:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.142 13:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:07.142 [2024-11-25 13:07:46.951754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.142 13:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:07.403 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:07.403 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:07.665 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:07.665 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:07.665 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:07.925 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bd477135-c199-4cbb-8db7-03e2c2eecd6d 00:33:07.925 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bd477135-c199-4cbb-8db7-03e2c2eecd6d lvol 20 00:33:08.186 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9bce5253-b101-4d66-9f93-85b84f363ca1 00:33:08.186 13:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:08.186 13:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9bce5253-b101-4d66-9f93-85b84f363ca1 00:33:08.446 13:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:08.707 [2024-11-25 13:07:48.355883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:33:08.707 13:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:08.707 13:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=874419 00:33:08.707 13:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:08.707 13:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:09.743 13:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9bce5253-b101-4d66-9f93-85b84f363ca1 MY_SNAPSHOT 00:33:10.003 13:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=100e73ea-87a8-4344-9a44-5e710445d4c1 00:33:10.003 13:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9bce5253-b101-4d66-9f93-85b84f363ca1 30 00:33:10.264 13:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 100e73ea-87a8-4344-9a44-5e710445d4c1 MY_CLONE 00:33:10.524 13:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e83c94ba-ff80-429e-a175-039048cec34a 00:33:10.524 13:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e83c94ba-ff80-429e-a175-039048cec34a 00:33:10.784 13:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 874419 00:33:20.782 Initializing NVMe Controllers 00:33:20.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:20.782 Controller IO queue size 128, less than required. 00:33:20.782 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:20.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:20.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:20.782 Initialization complete. Launching workers. 
00:33:20.782 ======================================================== 00:33:20.782 Latency(us) 00:33:20.782 Device Information : IOPS MiB/s Average min max 00:33:20.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12369.40 48.32 10350.14 1497.60 52705.11 00:33:20.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15859.30 61.95 8071.05 533.19 56583.94 00:33:20.782 ======================================================== 00:33:20.782 Total : 28228.70 110.27 9069.71 533.19 56583.94 00:33:20.783 00:33:20.783 13:07:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9bce5253-b101-4d66-9f93-85b84f363ca1 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bd477135-c199-4cbb-8db7-03e2c2eecd6d 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.783 rmmod nvme_tcp 00:33:20.783 rmmod nvme_fabrics 00:33:20.783 rmmod nvme_keyring 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 873846 ']' 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 873846 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 873846 ']' 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 873846 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 
-- # ps --no-headers -o comm= 873846 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 873846' 00:33:20.783 killing process with pid 873846 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 873846 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 873846 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.783 13:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.166 13:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.166 00:33:22.166 real 0m24.668s 00:33:22.166 user 0m55.811s 00:33:22.166 sys 0m11.244s 00:33:22.166 13:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.166 13:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:22.166 ************************************ 00:33:22.166 END TEST nvmf_lvol 00:33:22.166 ************************************ 00:33:22.166 13:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:22.166 13:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:22.166 13:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.166 13:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.166 ************************************ 00:33:22.166 START TEST nvmf_lvs_grow 00:33:22.166 ************************************ 
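Note: the nvmf_lvol run that just ended (END TEST above) reduces to the RPC sequence below, condensed from its trace. $rpc abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log, and the angle-bracket placeholders stand for the UUIDs the run printed (lvstore bd477135-..., lvol 9bce5253-..., snapshot 100e73ea-..., clone e83c94ba-...):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB I/O unit size
  $rpc bdev_malloc_create 64 512                        # Malloc0: 64 MiB, 512 B blocks
  $rpc bdev_malloc_create 64 512                        # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0, 64 KiB strip
  $rpc bdev_lvol_create_lvstore raid0 lvs
  $rpc bdev_lvol_create -u <lvs-uuid> lvol 20           # 20 MiB lvol
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # with spdk_nvme_perf (pid 874419) writing over TCP in the background:
  $rpc bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  $rpc bdev_lvol_resize <lvol-uuid> 30                  # grow the lvol to 30 MiB
  $rpc bdev_lvol_clone <snapshot-uuid> MY_CLONE
  $rpc bdev_lvol_inflate <clone-uuid>                   # decouple the clone from its snapshot

The sequence exercises snapshot, resize, clone and inflate while the lvol is under active I/O; nvmf_lvol.sh@53 then waits for the perf job (the latency table above) before tearing the target down.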
00:33:22.166 13:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:22.166 * Looking for test storage... 00:33:22.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.166 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:22.166 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:33:22.166 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.427 --rc genhtml_branch_coverage=1 00:33:22.427 --rc genhtml_function_coverage=1 00:33:22.427 --rc genhtml_legend=1 00:33:22.427 --rc geninfo_all_blocks=1 00:33:22.427 --rc geninfo_unexecuted_blocks=1 00:33:22.427 00:33:22.427 ' 00:33:22.427 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.428 --rc genhtml_branch_coverage=1 00:33:22.428 --rc genhtml_function_coverage=1 00:33:22.428 --rc genhtml_legend=1 00:33:22.428 --rc geninfo_all_blocks=1 00:33:22.428 --rc geninfo_unexecuted_blocks=1 00:33:22.428 00:33:22.428 ' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:22.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.428 --rc genhtml_branch_coverage=1 00:33:22.428 --rc genhtml_function_coverage=1 00:33:22.428 --rc genhtml_legend=1 00:33:22.428 --rc geninfo_all_blocks=1 00:33:22.428 --rc geninfo_unexecuted_blocks=1 00:33:22.428 00:33:22.428 ' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:22.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.428 --rc genhtml_branch_coverage=1 00:33:22.428 --rc genhtml_function_coverage=1 00:33:22.428 --rc genhtml_legend=1 00:33:22.428 --rc geninfo_all_blocks=1 00:33:22.428 --rc geninfo_unexecuted_blocks=1 00:33:22.428 00:33:22.428 ' 00:33:22.428 13:08:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
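Note: the build_nvmf_app_args trace (nvmf/common.sh@25-34, continuing just below with the --interrupt-mode append) assembles the target's command line roughly as sketched here. This is a condensed illustration, not the nvmf/common.sh source: the array seeding is outside this excerpt, and the guard variable name is invented since the trace only shows the already-expanded test '[' 1 -eq 1 ']':

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id 0, all tracepoint groups
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty on this run: hugepages are in use
  ((interrupt_mode)) && NVMF_APP+=(--interrupt-mode)   # guard name illustrative

The net effect is visible where nvmfappstart launches the target later in this trace: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1.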
00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.428 13:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.568 13:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
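Note: gather_supported_nvmf_pci_devs, entered above, matches NICs by PCI vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox IDs) and then resolves each matching port to its kernel netdev via /sys/bus/pci/devices/$pci/net/. An illustrative, self-contained reconstruction — not the nvmf/common.sh source — that would produce the "Found ..." lines below on this rig:

  #!/usr/bin/env bash
  # Scan PCI for Intel E810 ports (the IDs this rig reports) and list their netdevs.
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      ven=$(<"$pci/vendor") dev=$(<"$pci/device")
      if [[ $ven == 0x8086 && ( $dev == 0x1592 || $dev == 0x159b ) ]]; then
          for nd in "$pci"/net/*; do               # the pci_net_devs glob from the trace
              [[ -e $nd ]] && net_devs+=("${nd##*/}")
          done
      fi
  done
  ((${#net_devs[@]})) && printf 'Found net device: %s\n' "${net_devs[@]}"   # here: cvl_0_0, cvl_0_1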
00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:30.568 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:30.568 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.568 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:30.569 Found net devices under 0000:31:00.0: cvl_0_0 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:30.569 Found net devices under 0000:31:00.1: cvl_0_1 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.569 13:08:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.569 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:33:30.830 00:33:30.830 --- 10.0.0.2 ping statistics --- 00:33:30.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.830 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:30.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:33:30.830 00:33:30.830 --- 10.0.0.1 ping statistics --- 00:33:30.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.830 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=881115 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 881115 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 881115 ']' 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.830 13:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:30.830 [2024-11-25 13:08:10.631220] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
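Note: nvmf_tcp_init, traced above, isolates the two E810 ports into a point-to-point test topology before the target starts: cvl_0_0 (target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace, cvl_0_1 (initiator side, 10.0.0.1) stays in the host namespace, and an iptables rule opens TCP/4420. Collected verbatim from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                  # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host

Both pings answering (0.652 ms and 0.306 ms above) confirms the link before nvmf_tgt is launched inside the namespace with -m 0x1 and --interrupt-mode.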
00:33:30.830 [2024-11-25 13:08:10.632829] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:33:30.830 [2024-11-25 13:08:10.632912] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.830 [2024-11-25 13:08:10.724584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.091 [2024-11-25 13:08:10.765383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.091 [2024-11-25 13:08:10.765420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.091 [2024-11-25 13:08:10.765439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.091 [2024-11-25 13:08:10.765446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.091 [2024-11-25 13:08:10.765451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.091 [2024-11-25 13:08:10.766053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.091 [2024-11-25 13:08:10.821987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:31.091 [2024-11-25 13:08:10.822234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:31.827 [2024-11-25 13:08:11.626824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:31.827 ************************************ 00:33:31.827 START TEST lvs_grow_clean 00:33:31.827 ************************************ 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:31.827 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:31.828 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:32.087 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:32.087 13:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:32.350 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:32.350 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:32.350 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:32.350 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:32.350 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:32.350 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a56c3b22-3b0d-467c-8205-23704c6d7c01 lvol 150 00:33:32.610 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a2fac223-37c7-4548-b924-54a7f95b6841 00:33:32.610 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:32.610 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:32.871 [2024-11-25 13:08:12.546512] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:32.871 [2024-11-25 13:08:12.546658] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:32.871 true 00:33:32.871 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:32.871 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:32.871 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:32.871 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:33.132 13:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2fac223-37c7-4548-b924-54a7f95b6841 00:33:33.393 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:33.393 [2024-11-25 13:08:13.215059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.393 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=881824 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 881824 /var/tmp/bdevperf.sock 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 881824 ']' 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:33.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.654 13:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:33.654 [2024-11-25 13:08:13.444023] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:33:33.654 [2024-11-25 13:08:13.444084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881824 ] 00:33:33.654 [2024-11-25 13:08:13.539609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.914 [2024-11-25 13:08:13.578098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.486 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.486 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:34.486 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:34.748 Nvme0n1 00:33:34.748 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:34.748 [ 00:33:34.748 { 00:33:34.748 "name": "Nvme0n1", 00:33:34.748 "aliases": [ 00:33:34.748 "a2fac223-37c7-4548-b924-54a7f95b6841" 00:33:34.748 ], 00:33:34.748 "product_name": "NVMe disk", 00:33:34.748 "block_size": 4096, 00:33:34.748 "num_blocks": 38912, 00:33:34.748 "uuid": "a2fac223-37c7-4548-b924-54a7f95b6841", 00:33:34.748 "numa_id": 0, 00:33:34.748 "assigned_rate_limits": { 00:33:34.748 "rw_ios_per_sec": 0, 00:33:34.748 "rw_mbytes_per_sec": 0, 00:33:34.748 "r_mbytes_per_sec": 0, 00:33:34.748 "w_mbytes_per_sec": 0 00:33:34.748 }, 00:33:34.748 "claimed": false, 00:33:34.748 "zoned": false, 00:33:34.748 "supported_io_types": { 00:33:34.748 "read": true, 00:33:34.748 "write": true, 00:33:34.748 "unmap": true, 00:33:34.748 "flush": true, 00:33:34.748 "reset": true, 00:33:34.748 "nvme_admin": true, 00:33:34.748 "nvme_io": true, 00:33:34.748 "nvme_io_md": false, 00:33:34.748 "write_zeroes": true, 00:33:34.748 "zcopy": false, 00:33:34.748 "get_zone_info": false, 00:33:34.748 "zone_management": false, 00:33:34.748 "zone_append": false, 00:33:34.748 "compare": true, 00:33:34.748 "compare_and_write": true, 00:33:34.748 "abort": true, 00:33:34.748 "seek_hole": false, 00:33:34.748 "seek_data": false, 00:33:34.748 "copy": true, 
00:33:34.748 "nvme_iov_md": false 00:33:34.748 }, 00:33:34.748 "memory_domains": [ 00:33:34.748 { 00:33:34.748 "dma_device_id": "system", 00:33:34.748 "dma_device_type": 1 00:33:34.748 } 00:33:34.748 ], 00:33:34.748 "driver_specific": { 00:33:34.748 "nvme": [ 00:33:34.748 { 00:33:34.748 "trid": { 00:33:34.748 "trtype": "TCP", 00:33:34.748 "adrfam": "IPv4", 00:33:34.748 "traddr": "10.0.0.2", 00:33:34.748 "trsvcid": "4420", 00:33:34.748 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:34.748 }, 00:33:34.748 "ctrlr_data": { 00:33:34.748 "cntlid": 1, 00:33:34.748 "vendor_id": "0x8086", 00:33:34.748 "model_number": "SPDK bdev Controller", 00:33:34.748 "serial_number": "SPDK0", 00:33:34.748 "firmware_revision": "25.01", 00:33:34.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.748 "oacs": { 00:33:34.748 "security": 0, 00:33:34.748 "format": 0, 00:33:34.748 "firmware": 0, 00:33:34.748 "ns_manage": 0 00:33:34.748 }, 00:33:34.748 "multi_ctrlr": true, 00:33:34.748 "ana_reporting": false 00:33:34.748 }, 00:33:34.748 "vs": { 00:33:34.748 "nvme_version": "1.3" 00:33:34.748 }, 00:33:34.748 "ns_data": { 00:33:34.748 "id": 1, 00:33:34.748 "can_share": true 00:33:34.748 } 00:33:34.748 } 00:33:34.748 ], 00:33:34.748 "mp_policy": "active_passive" 00:33:34.748 } 00:33:34.748 } 00:33:34.748 ] 00:33:34.748 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=881928 00:33:34.748 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:34.748 13:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:35.009 Running I/O for 10 seconds... 
00:33:35.951 Latency(us) 00:33:35.951 [2024-11-25T12:08:15.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:35.951 Nvme0n1 : 1.00 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:33:35.951 [2024-11-25T12:08:15.854Z] =================================================================================================================== 00:33:35.951 [2024-11-25T12:08:15.854Z] Total : 17790.00 69.49 0.00 0.00 0.00 0.00 0.00 00:33:35.951 00:33:36.892 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:36.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:36.892 Nvme0n1 : 2.00 17912.00 69.97 0.00 0.00 0.00 0.00 0.00 00:33:36.892 [2024-11-25T12:08:16.795Z] =================================================================================================================== 00:33:36.892 [2024-11-25T12:08:16.795Z] Total : 17912.00 69.97 0.00 0.00 0.00 0.00 0.00 00:33:36.892 00:33:37.153 true 00:33:37.153 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:37.153 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:37.153 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:37.153 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:37.153 13:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 881928 00:33:38.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:38.096 Nvme0n1 : 3.00 17952.67 70.13 0.00 0.00 0.00 0.00 0.00 00:33:38.096 [2024-11-25T12:08:17.999Z] =================================================================================================================== 00:33:38.096 [2024-11-25T12:08:17.999Z] Total : 17952.67 70.13 0.00 0.00 0.00 0.00 0.00 00:33:38.096 00:33:39.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.039 Nvme0n1 : 4.00 17989.00 70.27 0.00 0.00 0.00 0.00 0.00 00:33:39.039 [2024-11-25T12:08:18.942Z] =================================================================================================================== 00:33:39.039 [2024-11-25T12:08:18.942Z] Total : 17989.00 70.27 0.00 0.00 0.00 0.00 0.00 00:33:39.039 00:33:39.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.981 Nvme0n1 : 5.00 18010.60 70.35 0.00 0.00 0.00 0.00 0.00 00:33:39.981 [2024-11-25T12:08:19.884Z] =================================================================================================================== 00:33:39.981 [2024-11-25T12:08:19.884Z] Total : 18010.60 70.35 0.00 0.00 0.00 0.00 0.00 00:33:39.981 00:33:40.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:40.926 Nvme0n1 : 6.00 18035.67 70.45 0.00 0.00 0.00 0.00 0.00 00:33:40.926 [2024-11-25T12:08:20.829Z] 
=================================================================================================================== 00:33:40.926 [2024-11-25T12:08:20.829Z] Total : 18035.67 70.45 0.00 0.00 0.00 0.00 0.00 00:33:40.926 00:33:41.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:41.866 Nvme0n1 : 7.00 18053.57 70.52 0.00 0.00 0.00 0.00 0.00 00:33:41.866 [2024-11-25T12:08:21.769Z] =================================================================================================================== 00:33:41.866 [2024-11-25T12:08:21.769Z] Total : 18053.57 70.52 0.00 0.00 0.00 0.00 0.00 00:33:41.866 00:33:43.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:43.251 Nvme0n1 : 8.00 18067.00 70.57 0.00 0.00 0.00 0.00 0.00 00:33:43.251 [2024-11-25T12:08:23.154Z] =================================================================================================================== 00:33:43.251 [2024-11-25T12:08:23.154Z] Total : 18067.00 70.57 0.00 0.00 0.00 0.00 0.00 00:33:43.251 00:33:44.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:44.193 Nvme0n1 : 9.00 18077.44 70.62 0.00 0.00 0.00 0.00 0.00 00:33:44.193 [2024-11-25T12:08:24.096Z] =================================================================================================================== 00:33:44.193 [2024-11-25T12:08:24.096Z] Total : 18077.44 70.62 0.00 0.00 0.00 0.00 0.00 00:33:44.193 00:33:45.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.136 Nvme0n1 : 10.00 18085.80 70.65 0.00 0.00 0.00 0.00 0.00 00:33:45.136 [2024-11-25T12:08:25.039Z] =================================================================================================================== 00:33:45.136 [2024-11-25T12:08:25.039Z] Total : 18085.80 70.65 0.00 0.00 0.00 0.00 0.00 00:33:45.136 00:33:45.136 00:33:45.136 Latency(us) 00:33:45.136 [2024-11-25T12:08:25.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.136 Nvme0n1 : 10.01 18085.67 70.65 0.00 0.00 7075.71 2525.87 13544.11 00:33:45.136 [2024-11-25T12:08:25.039Z] =================================================================================================================== 00:33:45.136 [2024-11-25T12:08:25.039Z] Total : 18085.67 70.65 0.00 0.00 7075.71 2525.87 13544.11 00:33:45.136 { 00:33:45.136 "results": [ 00:33:45.136 { 00:33:45.136 "job": "Nvme0n1", 00:33:45.136 "core_mask": "0x2", 00:33:45.136 "workload": "randwrite", 00:33:45.136 "status": "finished", 00:33:45.136 "queue_depth": 128, 00:33:45.136 "io_size": 4096, 00:33:45.136 "runtime": 10.007151, 00:33:45.136 "iops": 18085.66693957151, 00:33:45.136 "mibps": 70.64713648270121, 00:33:45.136 "io_failed": 0, 00:33:45.136 "io_timeout": 0, 00:33:45.136 "avg_latency_us": 7075.709302597991, 00:33:45.136 "min_latency_us": 2525.866666666667, 00:33:45.136 "max_latency_us": 13544.106666666667 00:33:45.136 } 00:33:45.136 ], 00:33:45.136 "core_count": 1 00:33:45.136 } 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 881824 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 881824 ']' 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 881824 
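Interleaved with the per-second I/O samples above is the actual grow/verify step: the lvstore is grown onto the enlarged backing file while bdevperf keeps writing, then the cluster count is checked. A condensed sketch of that step, assuming the lvstore UUID from this run; 99 total_data_clusters is what a 400M file minus blobstore metadata yields at the 4M cluster size used here:

    LVS=a56c3b22-3b0d-467c-8205-23704c6d7c01
    rpc.py bdev_lvol_grow_lvstore -u "$LVS"
    clusters=$(rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 )) || echo "unexpected cluster count: $clusters" >&2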
00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 881824 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 881824' 00:33:45.136 killing process with pid 881824 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 881824 00:33:45.136 Received shutdown signal, test time was about 10.000000 seconds 00:33:45.136 00:33:45.136 Latency(us) 00:33:45.136 [2024-11-25T12:08:25.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.136 [2024-11-25T12:08:25.039Z] =================================================================================================================== 00:33:45.136 [2024-11-25T12:08:25.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.136 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 881824 00:33:45.137 13:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:45.397 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:45.659 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:45.659 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:45.659 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:45.659 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:45.659 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:45.920 [2024-11-25 13:08:25.634443] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 
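The NOT/valid_exec_arg block that follows is a negative test: after the base aio_bdev is hot-removed, the lvstore must no longer be queryable, and the RPC is expected to fail with the -19 / "No such device" response shown below. A sketch of the same assertion, assuming the lvstore UUID above:

    rpc.py bdev_aio_delete aio_bdev        # hot-remove closes the lvstore
    if rpc.py bdev_lvol_get_lvstores -u "$LVS"; then
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi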
00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:45.920 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:46.182 request: 00:33:46.182 { 00:33:46.182 "uuid": "a56c3b22-3b0d-467c-8205-23704c6d7c01", 00:33:46.182 "method": "bdev_lvol_get_lvstores", 00:33:46.182 "req_id": 1 00:33:46.182 } 00:33:46.182 Got JSON-RPC error response 00:33:46.182 response: 00:33:46.182 { 00:33:46.182 "code": -19, 00:33:46.182 "message": "No such device" 00:33:46.182 } 00:33:46.182 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:46.182 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:46.182 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:46.182 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:46.182 13:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:46.182 aio_bdev 00:33:46.182 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a2fac223-37c7-4548-b924-54a7f95b6841 00:33:46.182 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a2fac223-37c7-4548-b924-54a7f95b6841 00:33:46.182 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:46.182 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:46.182 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:46.182 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:46.182 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:46.443 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a2fac223-37c7-4548-b924-54a7f95b6841 -t 2000 00:33:46.443 [ 00:33:46.443 { 00:33:46.443 "name": "a2fac223-37c7-4548-b924-54a7f95b6841", 00:33:46.443 "aliases": [ 00:33:46.443 "lvs/lvol" 00:33:46.443 ], 00:33:46.443 "product_name": "Logical Volume", 00:33:46.443 "block_size": 4096, 00:33:46.443 "num_blocks": 38912, 00:33:46.443 "uuid": "a2fac223-37c7-4548-b924-54a7f95b6841", 00:33:46.443 "assigned_rate_limits": { 00:33:46.443 "rw_ios_per_sec": 0, 00:33:46.443 "rw_mbytes_per_sec": 0, 00:33:46.443 "r_mbytes_per_sec": 0, 00:33:46.443 "w_mbytes_per_sec": 0 00:33:46.443 }, 00:33:46.443 "claimed": false, 00:33:46.443 "zoned": false, 00:33:46.443 "supported_io_types": { 00:33:46.443 "read": true, 00:33:46.443 "write": true, 00:33:46.443 "unmap": true, 00:33:46.443 "flush": false, 00:33:46.443 "reset": true, 00:33:46.443 "nvme_admin": false, 00:33:46.443 "nvme_io": false, 00:33:46.443 "nvme_io_md": false, 00:33:46.443 "write_zeroes": true, 00:33:46.443 "zcopy": false, 00:33:46.443 "get_zone_info": false, 00:33:46.443 "zone_management": false, 00:33:46.443 "zone_append": false, 00:33:46.443 "compare": false, 00:33:46.443 "compare_and_write": false, 00:33:46.443 "abort": false, 00:33:46.443 "seek_hole": true, 00:33:46.443 "seek_data": true, 00:33:46.443 "copy": false, 00:33:46.443 "nvme_iov_md": false 00:33:46.443 }, 00:33:46.443 "driver_specific": { 00:33:46.443 "lvol": { 00:33:46.443 "lvol_store_uuid": "a56c3b22-3b0d-467c-8205-23704c6d7c01", 00:33:46.443 "base_bdev": "aio_bdev", 00:33:46.443 "thin_provision": false, 00:33:46.443 "num_allocated_clusters": 38, 00:33:46.443 "snapshot": false, 00:33:46.443 "clone": false, 00:33:46.443 "esnap_clone": false 00:33:46.443 } 00:33:46.443 } 00:33:46.443 } 00:33:46.443 ] 00:33:46.443 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:46.443 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:46.443 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:46.704 13:08:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:46.704 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:46.704 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:46.964 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:46.964 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a2fac223-37c7-4548-b924-54a7f95b6841 00:33:46.964 13:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a56c3b22-3b0d-467c-8205-23704c6d7c01 00:33:47.224 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:47.485 00:33:47.485 real 0m15.539s 00:33:47.485 user 0m15.251s 00:33:47.485 sys 0m1.355s 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:47.485 ************************************ 00:33:47.485 END TEST lvs_grow_clean 00:33:47.485 ************************************ 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:47.485 ************************************ 00:33:47.485 START TEST lvs_grow_dirty 00:33:47.485 ************************************ 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:47.485 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:47.746 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:47.746 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:48.007 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b9475b45-ef63-439c-929f-ecc287508cc8 00:33:48.007 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:33:48.007 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:48.007 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:48.007 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:48.007 13:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b9475b45-ef63-439c-929f-ecc287508cc8 lvol 150 00:33:48.267 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d79a3bb-ee3e-45cd-b00d-ae202293ca44 00:33:48.267 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:48.267 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:48.526 [2024-11-25 13:08:28.210514] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:48.527 [2024-11-25 13:08:28.210680] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:48.527 true 00:33:48.527 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:48.527 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:33:48.527 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:48.527 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:48.786 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d79a3bb-ee3e-45cd-b00d-ae202293ca44 00:33:49.046 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.046 [2024-11-25 13:08:28.874652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.046 13:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=884707 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 884707 /var/tmp/bdevperf.sock 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 884707 ']' 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:49.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
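The dirty-mode setup logged above mirrors the clean case but grows the backing file while the lvstore stays loaded: create a 200M aio file and lvstore, carve a 150M lvol, then enlarge the file and rescan so the bdev picks up the new size (51200 -> 102400 blocks in the notice above). A sketch of that sequence, assuming AIO is shorthand for the aio_bdev file path from this run:

    AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$AIO"
    rpc.py bdev_aio_create "$AIO" aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$AIO"           # grow the file first...
    rpc.py bdev_aio_rescan aio_bdev   # ...then let the bdev see the new block count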
00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.307 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:49.307 [2024-11-25 13:08:29.119773] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:33:49.307 [2024-11-25 13:08:29.119832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884707 ] 00:33:49.567 [2024-11-25 13:08:29.210665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.567 [2024-11-25 13:08:29.240850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.139 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.139 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:50.139 13:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:50.399 Nvme0n1 00:33:50.399 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:50.661 [ 00:33:50.661 { 00:33:50.661 "name": "Nvme0n1", 00:33:50.661 "aliases": [ 00:33:50.661 "7d79a3bb-ee3e-45cd-b00d-ae202293ca44" 00:33:50.661 ], 00:33:50.661 "product_name": "NVMe disk", 00:33:50.661 "block_size": 4096, 00:33:50.661 "num_blocks": 38912, 00:33:50.661 "uuid": "7d79a3bb-ee3e-45cd-b00d-ae202293ca44", 00:33:50.661 "numa_id": 0, 00:33:50.661 "assigned_rate_limits": { 00:33:50.661 "rw_ios_per_sec": 0, 00:33:50.661 "rw_mbytes_per_sec": 0, 00:33:50.661 "r_mbytes_per_sec": 0, 00:33:50.661 "w_mbytes_per_sec": 0 00:33:50.661 }, 00:33:50.661 "claimed": false, 00:33:50.661 "zoned": false, 00:33:50.661 "supported_io_types": { 00:33:50.661 "read": true, 00:33:50.661 "write": true, 00:33:50.661 "unmap": true, 00:33:50.661 "flush": true, 00:33:50.661 "reset": true, 00:33:50.661 "nvme_admin": true, 00:33:50.661 "nvme_io": true, 00:33:50.661 "nvme_io_md": false, 00:33:50.661 "write_zeroes": true, 00:33:50.661 "zcopy": false, 00:33:50.661 "get_zone_info": false, 00:33:50.661 "zone_management": false, 00:33:50.661 "zone_append": false, 00:33:50.661 "compare": true, 00:33:50.661 "compare_and_write": true, 00:33:50.661 "abort": true, 00:33:50.661 "seek_hole": false, 00:33:50.661 "seek_data": false, 00:33:50.661 "copy": true, 00:33:50.661 "nvme_iov_md": false 00:33:50.661 }, 00:33:50.661 "memory_domains": [ 00:33:50.661 { 00:33:50.661 "dma_device_id": "system", 00:33:50.661 "dma_device_type": 1 00:33:50.661 } 00:33:50.661 ], 00:33:50.661 "driver_specific": { 00:33:50.661 "nvme": [ 00:33:50.661 { 00:33:50.661 "trid": { 00:33:50.661 "trtype": "TCP", 00:33:50.661 "adrfam": "IPv4", 00:33:50.661 "traddr": "10.0.0.2", 00:33:50.661 "trsvcid": "4420", 00:33:50.661 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:50.661 }, 00:33:50.661 "ctrlr_data": { 
00:33:50.661 "cntlid": 1, 00:33:50.661 "vendor_id": "0x8086", 00:33:50.661 "model_number": "SPDK bdev Controller", 00:33:50.661 "serial_number": "SPDK0", 00:33:50.661 "firmware_revision": "25.01", 00:33:50.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:50.661 "oacs": { 00:33:50.661 "security": 0, 00:33:50.661 "format": 0, 00:33:50.661 "firmware": 0, 00:33:50.661 "ns_manage": 0 00:33:50.661 }, 00:33:50.661 "multi_ctrlr": true, 00:33:50.661 "ana_reporting": false 00:33:50.661 }, 00:33:50.661 "vs": { 00:33:50.661 "nvme_version": "1.3" 00:33:50.661 }, 00:33:50.661 "ns_data": { 00:33:50.661 "id": 1, 00:33:50.661 "can_share": true 00:33:50.661 } 00:33:50.661 } 00:33:50.661 ], 00:33:50.661 "mp_policy": "active_passive" 00:33:50.661 } 00:33:50.661 } 00:33:50.661 ] 00:33:50.661 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=884914 00:33:50.661 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:50.661 13:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:50.661 Running I/O for 10 seconds... 00:33:51.602 Latency(us) 00:33:51.602 [2024-11-25T12:08:31.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:51.602 Nvme0n1 : 1.00 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:33:51.602 [2024-11-25T12:08:31.505Z] =================================================================================================================== 00:33:51.602 [2024-11-25T12:08:31.505Z] Total : 17783.00 69.46 0.00 0.00 0.00 0.00 0.00 00:33:51.602 00:33:52.543 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b9475b45-ef63-439c-929f-ecc287508cc8 00:33:52.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:52.543 Nvme0n1 : 2.00 17908.50 69.96 0.00 0.00 0.00 0.00 0.00 00:33:52.543 [2024-11-25T12:08:32.446Z] =================================================================================================================== 00:33:52.543 [2024-11-25T12:08:32.446Z] Total : 17908.50 69.96 0.00 0.00 0.00 0.00 0.00 00:33:52.543 00:33:52.803 true 00:33:52.803 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:33:52.803 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:52.803 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:52.803 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:52.803 13:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 884914 00:33:53.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:53.745 Nvme0n1 : 3.00 
17950.33 70.12 0.00 0.00 0.00 0.00 0.00 00:33:53.745 [2024-11-25T12:08:33.648Z] =================================================================================================================== 00:33:53.745 [2024-11-25T12:08:33.648Z] Total : 17950.33 70.12 0.00 0.00 0.00 0.00 0.00 00:33:53.745 00:33:54.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:54.687 Nvme0n1 : 4.00 18003.00 70.32 0.00 0.00 0.00 0.00 0.00 00:33:54.687 [2024-11-25T12:08:34.590Z] =================================================================================================================== 00:33:54.687 [2024-11-25T12:08:34.590Z] Total : 18003.00 70.32 0.00 0.00 0.00 0.00 0.00 00:33:54.687 00:33:55.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:55.629 Nvme0n1 : 5.00 18022.00 70.40 0.00 0.00 0.00 0.00 0.00 00:33:55.629 [2024-11-25T12:08:35.532Z] =================================================================================================================== 00:33:55.629 [2024-11-25T12:08:35.532Z] Total : 18022.00 70.40 0.00 0.00 0.00 0.00 0.00 00:33:55.629 00:33:56.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:56.568 Nvme0n1 : 6.00 18045.17 70.49 0.00 0.00 0.00 0.00 0.00 00:33:56.568 [2024-11-25T12:08:36.471Z] =================================================================================================================== 00:33:56.568 [2024-11-25T12:08:36.471Z] Total : 18045.17 70.49 0.00 0.00 0.00 0.00 0.00 00:33:56.568 00:33:57.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:57.950 Nvme0n1 : 7.00 18061.71 70.55 0.00 0.00 0.00 0.00 0.00 00:33:57.950 [2024-11-25T12:08:37.853Z] =================================================================================================================== 00:33:57.950 [2024-11-25T12:08:37.853Z] Total : 18061.71 70.55 0.00 0.00 0.00 0.00 0.00 00:33:57.950 00:33:58.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:58.891 Nvme0n1 : 8.00 18082.00 70.63 0.00 0.00 0.00 0.00 0.00 00:33:58.891 [2024-11-25T12:08:38.794Z] =================================================================================================================== 00:33:58.891 [2024-11-25T12:08:38.795Z] Total : 18082.00 70.63 0.00 0.00 0.00 0.00 0.00 00:33:58.892 00:33:59.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:59.836 Nvme0n1 : 9.00 18090.78 70.67 0.00 0.00 0.00 0.00 0.00 00:33:59.836 [2024-11-25T12:08:39.739Z] =================================================================================================================== 00:33:59.836 [2024-11-25T12:08:39.739Z] Total : 18090.78 70.67 0.00 0.00 0.00 0.00 0.00 00:33:59.836 00:34:00.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:00.776 Nvme0n1 : 10.00 18097.80 70.69 0.00 0.00 0.00 0.00 0.00 00:34:00.776 [2024-11-25T12:08:40.679Z] =================================================================================================================== 00:34:00.776 [2024-11-25T12:08:40.680Z] Total : 18097.80 70.69 0.00 0.00 0.00 0.00 0.00 00:34:00.777 00:34:00.777 00:34:00.777 Latency(us) 00:34:00.777 [2024-11-25T12:08:40.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:00.777 Nvme0n1 : 10.00 18095.29 70.68 0.00 0.00 7071.09 1665.71 13598.72 00:34:00.777 
[2024-11-25T12:08:40.680Z] =================================================================================================================== 00:34:00.777 [2024-11-25T12:08:40.680Z] Total : 18095.29 70.68 0.00 0.00 7071.09 1665.71 13598.72 00:34:00.777 { 00:34:00.777 "results": [ 00:34:00.777 { 00:34:00.777 "job": "Nvme0n1", 00:34:00.777 "core_mask": "0x2", 00:34:00.777 "workload": "randwrite", 00:34:00.777 "status": "finished", 00:34:00.777 "queue_depth": 128, 00:34:00.777 "io_size": 4096, 00:34:00.777 "runtime": 10.004981, 00:34:00.777 "iops": 18095.286737675964, 00:34:00.777 "mibps": 70.68471381904673, 00:34:00.777 "io_failed": 0, 00:34:00.777 "io_timeout": 0, 00:34:00.777 "avg_latency_us": 7071.0931621769405, 00:34:00.777 "min_latency_us": 1665.7066666666667, 00:34:00.777 "max_latency_us": 13598.72 00:34:00.777 } 00:34:00.777 ], 00:34:00.777 "core_count": 1 00:34:00.777 } 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 884707 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 884707 ']' 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 884707 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884707 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884707' 00:34:00.777 killing process with pid 884707 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 884707 00:34:00.777 Received shutdown signal, test time was about 10.000000 seconds 00:34:00.777 00:34:00.777 Latency(us) 00:34:00.777 [2024-11-25T12:08:40.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.777 [2024-11-25T12:08:40.680Z] =================================================================================================================== 00:34:00.777 [2024-11-25T12:08:40.680Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 884707 00:34:00.777 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:01.038 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
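The JSON object above is bdevperf's final results structure. When a script wants the headline numbers rather than the per-second table, a jq filter over that object is enough; a sketch assuming the results were saved to results.json (a hypothetical filename, the harness itself consumes them inline):

    jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json
    # -> Nvme0n1: 18095.286737675964 IOPS, avg 7071.0931621769405 us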
00:34:01.298 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:01.298 13:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 881115 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 881115 00:34:01.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 881115 Killed "${NVMF_APP[@]}" "$@" 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=886934 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 886934 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 886934 ']' 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
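Here the dirty variant kills the original nvmf_tgt (pid 881115) with SIGKILL so the lvstore is never cleanly unloaded, then restarts the target in interrupt mode inside the test netns. A sketch of that restart, assuming ./build/bin/nvmf_tgt is shorthand for the full workspace path in the log:

    kill -9 "$nvmfpid" || true        # leave the blobstore dirty on purpose
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # wait until the new target listens on /var/tmp/spdk.sock before issuing RPCs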
00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:01.298 13:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:01.558 [2024-11-25 13:08:41.241317] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:01.558 [2024-11-25 13:08:41.242068] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:34:01.558 [2024-11-25 13:08:41.242103] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.558 [2024-11-25 13:08:41.316125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.558 [2024-11-25 13:08:41.350566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.558 [2024-11-25 13:08:41.350599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.558 [2024-11-25 13:08:41.350607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.558 [2024-11-25 13:08:41.350614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.558 [2024-11-25 13:08:41.350619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.558 [2024-11-25 13:08:41.351189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.558 [2024-11-25 13:08:41.405486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:01.558 [2024-11-25 13:08:41.405738] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:02.129 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.129 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:02.129 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:02.129 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:02.129 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:02.389 [2024-11-25 13:08:42.213773] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:02.389 [2024-11-25 13:08:42.213887] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:02.389 [2024-11-25 13:08:42.213920] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7d79a3bb-ee3e-45cd-b00d-ae202293ca44 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7d79a3bb-ee3e-45cd-b00d-ae202293ca44 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:02.389 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:02.649 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d79a3bb-ee3e-45cd-b00d-ae202293ca44 -t 2000 00:34:02.910 [ 00:34:02.910 { 00:34:02.910 "name": "7d79a3bb-ee3e-45cd-b00d-ae202293ca44", 00:34:02.910 "aliases": [ 00:34:02.910 "lvs/lvol" 00:34:02.910 ], 00:34:02.910 "product_name": "Logical Volume", 00:34:02.910 "block_size": 4096, 00:34:02.910 "num_blocks": 38912, 00:34:02.910 "uuid": "7d79a3bb-ee3e-45cd-b00d-ae202293ca44", 00:34:02.910 "assigned_rate_limits": { 00:34:02.910 "rw_ios_per_sec": 0, 00:34:02.910 "rw_mbytes_per_sec": 0, 00:34:02.910 
"r_mbytes_per_sec": 0, 00:34:02.910 "w_mbytes_per_sec": 0 00:34:02.910 }, 00:34:02.910 "claimed": false, 00:34:02.910 "zoned": false, 00:34:02.910 "supported_io_types": { 00:34:02.910 "read": true, 00:34:02.910 "write": true, 00:34:02.910 "unmap": true, 00:34:02.910 "flush": false, 00:34:02.910 "reset": true, 00:34:02.910 "nvme_admin": false, 00:34:02.910 "nvme_io": false, 00:34:02.910 "nvme_io_md": false, 00:34:02.910 "write_zeroes": true, 00:34:02.910 "zcopy": false, 00:34:02.910 "get_zone_info": false, 00:34:02.910 "zone_management": false, 00:34:02.910 "zone_append": false, 00:34:02.910 "compare": false, 00:34:02.910 "compare_and_write": false, 00:34:02.910 "abort": false, 00:34:02.910 "seek_hole": true, 00:34:02.910 "seek_data": true, 00:34:02.910 "copy": false, 00:34:02.910 "nvme_iov_md": false 00:34:02.910 }, 00:34:02.910 "driver_specific": { 00:34:02.910 "lvol": { 00:34:02.910 "lvol_store_uuid": "b9475b45-ef63-439c-929f-ecc287508cc8", 00:34:02.910 "base_bdev": "aio_bdev", 00:34:02.910 "thin_provision": false, 00:34:02.910 "num_allocated_clusters": 38, 00:34:02.910 "snapshot": false, 00:34:02.910 "clone": false, 00:34:02.910 "esnap_clone": false 00:34:02.910 } 00:34:02.910 } 00:34:02.910 } 00:34:02.910 ] 00:34:02.910 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:02.910 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:34:02.910 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:02.910 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:34:02.910 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:02.910 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:34:03.170 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:34:03.170 13:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:03.170 [2024-11-25 13:08:43.051561] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:03.430 13:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:03.430 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:03.430 request: 00:34:03.430 { 00:34:03.430 "uuid": "b9475b45-ef63-439c-929f-ecc287508cc8", 00:34:03.430 "method": "bdev_lvol_get_lvstores", 00:34:03.430 "req_id": 1 00:34:03.430 } 00:34:03.431 Got JSON-RPC error response 00:34:03.431 response: 00:34:03.431 { 00:34:03.431 "code": -19, 00:34:03.431 "message": "No such device" 00:34:03.431 } 00:34:03.431 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:34:03.431 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:03.431 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:03.431 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:03.431 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:03.691 aio_bdev 00:34:03.691 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7d79a3bb-ee3e-45cd-b00d-ae202293ca44 00:34:03.691 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7d79a3bb-ee3e-45cd-b00d-ae202293ca44 00:34:03.691 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:03.691 13:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:03.691 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:03.691 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:03.691 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:03.691 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d79a3bb-ee3e-45cd-b00d-ae202293ca44 -t 2000 00:34:03.952 [ 00:34:03.952 { 00:34:03.952 "name": "7d79a3bb-ee3e-45cd-b00d-ae202293ca44", 00:34:03.952 "aliases": [ 00:34:03.952 "lvs/lvol" 00:34:03.952 ], 00:34:03.952 "product_name": "Logical Volume", 00:34:03.952 "block_size": 4096, 00:34:03.952 "num_blocks": 38912, 00:34:03.952 "uuid": "7d79a3bb-ee3e-45cd-b00d-ae202293ca44", 00:34:03.952 "assigned_rate_limits": { 00:34:03.952 "rw_ios_per_sec": 0, 00:34:03.952 "rw_mbytes_per_sec": 0, 00:34:03.952 "r_mbytes_per_sec": 0, 00:34:03.952 "w_mbytes_per_sec": 0 00:34:03.952 }, 00:34:03.952 "claimed": false, 00:34:03.952 "zoned": false, 00:34:03.952 "supported_io_types": { 00:34:03.952 "read": true, 00:34:03.952 "write": true, 00:34:03.952 "unmap": true, 00:34:03.952 "flush": false, 00:34:03.952 "reset": true, 00:34:03.952 "nvme_admin": false, 00:34:03.952 "nvme_io": false, 00:34:03.952 "nvme_io_md": false, 00:34:03.952 "write_zeroes": true, 00:34:03.952 "zcopy": false, 00:34:03.952 "get_zone_info": false, 00:34:03.952 "zone_management": false, 00:34:03.952 "zone_append": false, 00:34:03.952 "compare": false, 00:34:03.952 "compare_and_write": false, 00:34:03.952 "abort": false, 00:34:03.952 "seek_hole": true, 00:34:03.952 "seek_data": true, 00:34:03.952 "copy": false, 00:34:03.952 "nvme_iov_md": false 00:34:03.952 }, 00:34:03.952 "driver_specific": { 00:34:03.952 "lvol": { 00:34:03.952 "lvol_store_uuid": "b9475b45-ef63-439c-929f-ecc287508cc8", 00:34:03.952 "base_bdev": "aio_bdev", 00:34:03.952 "thin_provision": false, 00:34:03.952 "num_allocated_clusters": 38, 00:34:03.952 "snapshot": false, 00:34:03.952 "clone": false, 00:34:03.952 "esnap_clone": false 00:34:03.952 } 00:34:03.952 } 00:34:03.952 } 00:34:03.952 ] 00:34:03.952 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:03.952 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:03.952 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:04.212 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:04.212 13:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:04.212 13:08:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:04.212 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:04.212 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d79a3bb-ee3e-45cd-b00d-ae202293ca44 00:34:04.471 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9475b45-ef63-439c-929f-ecc287508cc8 00:34:04.731 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:04.732 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:04.992 00:34:04.992 real 0m17.345s 00:34:04.992 user 0m35.319s 00:34:04.992 sys 0m2.860s 00:34:04.992 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.992 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:04.993 ************************************ 00:34:04.993 END TEST lvs_grow_dirty 00:34:04.993 ************************************ 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:04.993 nvmf_trace.0 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
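Annotation: the lvs_grow_dirty assertions traced above all follow one pattern — call an RPC that returns JSON, extract a single field with jq, compare it arithmetically. A minimal standalone sketch of that pattern, using the rpc.py path and lvstore UUID from this run (the shell variable names here are illustrative, not from the script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    uuid=b9475b45-ef63-439c-929f-ecc287508cc8
    # bdev_lvol_get_lvstores returns a JSON array; jq -r pulls one field out
    free_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters')
    data_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
    (( free_clusters == 61 )) && (( data_clusters == 99 ))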
00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:04.993 rmmod nvme_tcp 00:34:04.993 rmmod nvme_fabrics 00:34:04.993 rmmod nvme_keyring 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 886934 ']' 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 886934 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 886934 ']' 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 886934 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 886934 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 886934' 00:34:04.993 killing process with pid 886934 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 886934 00:34:04.993 13:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 886934 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.254 13:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.798 00:34:07.798 real 0m45.168s 00:34:07.798 user 0m53.801s 00:34:07.798 sys 0m10.960s 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:07.798 ************************************ 00:34:07.798 END TEST nvmf_lvs_grow 00:34:07.798 ************************************ 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.798 ************************************ 00:34:07.798 START TEST nvmf_bdev_io_wait 00:34:07.798 ************************************ 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:07.798 * Looking for test storage... 
00:34:07.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:07.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.798 --rc genhtml_branch_coverage=1 00:34:07.798 --rc genhtml_function_coverage=1 00:34:07.798 --rc genhtml_legend=1 00:34:07.798 --rc geninfo_all_blocks=1 00:34:07.798 --rc geninfo_unexecuted_blocks=1 00:34:07.798 00:34:07.798 ' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:07.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.798 --rc genhtml_branch_coverage=1 00:34:07.798 --rc genhtml_function_coverage=1 00:34:07.798 --rc genhtml_legend=1 00:34:07.798 --rc geninfo_all_blocks=1 00:34:07.798 --rc geninfo_unexecuted_blocks=1 00:34:07.798 00:34:07.798 ' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:07.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.798 --rc genhtml_branch_coverage=1 00:34:07.798 --rc genhtml_function_coverage=1 00:34:07.798 --rc genhtml_legend=1 00:34:07.798 --rc geninfo_all_blocks=1 00:34:07.798 --rc geninfo_unexecuted_blocks=1 00:34:07.798 00:34:07.798 ' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:07.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.798 --rc genhtml_branch_coverage=1 00:34:07.798 --rc genhtml_function_coverage=1 00:34:07.798 --rc genhtml_legend=1 00:34:07.798 --rc geninfo_all_blocks=1 00:34:07.798 --rc 
geninfo_unexecuted_blocks=1 00:34:07.798 00:34:07.798 ' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.798 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.799 13:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:15.964 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
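Annotation: the per-device loop that follows maps each matched PCI function to its kernel network interfaces through sysfs. A condensed sketch of that lookup, using a PCI address from this host (condensed from the nvmf/common.sh trace; no new helpers assumed):

    pci=0000:31:00.0
    # every net device bound to the function appears under its sysfs node
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"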
00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:15.965 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:15.965 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:15.965 Found net devices under 0000:31:00.0: cvl_0_0 00:34:15.965 
13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:15.965 Found net devices under 0000:31:00.1: cvl_0_1 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.965 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:15.966 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:34:16.228 00:34:16.228 --- 10.0.0.2 ping statistics --- 00:34:16.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.228 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:16.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:34:16.228 00:34:16.228 --- 10.0.0.1 ping statistics --- 00:34:16.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.228 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=892479 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 892479 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 892479 ']' 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
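Annotation: the namespace plumbing that nvmf_tcp_init performed above reduces to a short sequence — move the target-side port into its own namespace, address both ends, open TCP/4420, and ping each direction. Condensed from the trace, device and namespace names as in this run (the real ipts helper also tags the iptables rule with an SPDK_NVMF comment, omitted here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator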
00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.228 13:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:16.228 [2024-11-25 13:08:56.013149] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:16.228 [2024-11-25 13:08:56.014291] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:34:16.228 [2024-11-25 13:08:56.014344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.228 [2024-11-25 13:08:56.106326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:16.490 [2024-11-25 13:08:56.149170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.490 [2024-11-25 13:08:56.149208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.490 [2024-11-25 13:08:56.149216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.490 [2024-11-25 13:08:56.149223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.490 [2024-11-25 13:08:56.149229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.490 [2024-11-25 13:08:56.150839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.490 [2024-11-25 13:08:56.150982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.490 [2024-11-25 13:08:56.151220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:16.490 [2024-11-25 13:08:56.151222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.490 [2024-11-25 13:08:56.151619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
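Annotation: nvmfappstart launched the target inside that namespace with four reactors (core mask 0xF), all trace groups enabled, and interrupt mode on. The command line below is reformatted from the trace for readability; the backgrounding and the waitforlisten behavior (blocking until /var/tmp/spdk.sock answers) are how the suite's helpers behave, sketched here rather than copied verbatim:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!                 # 892479 in this run
    waitforlisten "$nvmfpid"   # waits for the RPC socket before issuing any rpc_cmd

--wait-for-rpc is what holds subsystem initialization back until the framework_start_init RPC traced below.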
00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 [2024-11-25 13:08:56.899602] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:17.095 [2024-11-25 13:08:56.899950] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:17.095 [2024-11-25 13:08:56.900770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:17.095 [2024-11-25 13:08:56.900834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
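Annotation: the bdev_set_options/framework_start_init calls above plus the subsystem setup that follows amount to this sequence, collected in one place with arguments exactly as traced (rpc_cmd is the suite's wrapper around scripts/rpc.py; the reading of -p/-c as bdev_io pool and cache sizes, shrunk to force the io-wait path this test covers, is an interpretation, not from the log):

    rpc_cmd bdev_set_options -p 5 -c 1   # tiny bdev_io pool/cache, presumably to make I/O wait fire
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420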
00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 [2024-11-25 13:08:56.912131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 Malloc0 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:17.095 [2024-11-25 13:08:56.975976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=892696 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=892698 00:34:17.095 13:08:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.095 { 00:34:17.095 "params": { 00:34:17.095 "name": "Nvme$subsystem", 00:34:17.095 "trtype": "$TEST_TRANSPORT", 00:34:17.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.095 "adrfam": "ipv4", 00:34:17.095 "trsvcid": "$NVMF_PORT", 00:34:17.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.095 "hdgst": ${hdgst:-false}, 00:34:17.095 "ddgst": ${ddgst:-false} 00:34:17.095 }, 00:34:17.095 "method": "bdev_nvme_attach_controller" 00:34:17.095 } 00:34:17.095 EOF 00:34:17.095 )") 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=892700 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=892703 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.095 { 00:34:17.095 "params": { 00:34:17.095 "name": "Nvme$subsystem", 00:34:17.095 "trtype": "$TEST_TRANSPORT", 00:34:17.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.095 "adrfam": "ipv4", 00:34:17.095 "trsvcid": "$NVMF_PORT", 00:34:17.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.095 "hdgst": ${hdgst:-false}, 00:34:17.095 "ddgst": ${ddgst:-false} 00:34:17.095 }, 00:34:17.095 "method": "bdev_nvme_attach_controller" 00:34:17.095 } 00:34:17.095 EOF 00:34:17.095 )") 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.095 { 00:34:17.095 "params": { 00:34:17.095 "name": "Nvme$subsystem", 00:34:17.095 "trtype": "$TEST_TRANSPORT", 00:34:17.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.095 "adrfam": "ipv4", 00:34:17.095 "trsvcid": "$NVMF_PORT", 00:34:17.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.095 "hdgst": ${hdgst:-false}, 00:34:17.095 "ddgst": ${ddgst:-false} 00:34:17.095 }, 00:34:17.095 "method": "bdev_nvme_attach_controller" 00:34:17.095 } 00:34:17.095 EOF 00:34:17.095 )") 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:17.095 { 00:34:17.095 "params": { 00:34:17.095 "name": "Nvme$subsystem", 00:34:17.095 "trtype": "$TEST_TRANSPORT", 00:34:17.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:17.095 "adrfam": "ipv4", 00:34:17.095 "trsvcid": "$NVMF_PORT", 00:34:17.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:17.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:17.095 "hdgst": ${hdgst:-false}, 00:34:17.095 "ddgst": ${ddgst:-false} 00:34:17.095 }, 00:34:17.095 "method": "bdev_nvme_attach_controller" 00:34:17.095 } 00:34:17.095 EOF 00:34:17.095 )") 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 892696 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:17.095 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
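For readers following the trace: the four identical heredocs above all come from gen_nvmf_target_json in nvmf/common.sh, and each bdevperf instance receives the generated config through process substitution, which is why every command line shows --json /dev/fd/63. A minimal sketch of the launch pattern for one instance, assuming the environment carries the values the expanded configs print further down (tcp, 10.0.0.2, 4420); this is a reconstruction, not the verbatim script:

# Hypothetical standalone reproduction; in the test these variables are
# exported by the harness before gen_nvmf_target_json runs.
export TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
# <(...) hands bdevperf a /dev/fd path to the JSON stream, hence /dev/fd/63.
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!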
00:34:17.356 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:17.356 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:17.356 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.356 "params": { 00:34:17.356 "name": "Nvme1", 00:34:17.356 "trtype": "tcp", 00:34:17.356 "traddr": "10.0.0.2", 00:34:17.356 "adrfam": "ipv4", 00:34:17.356 "trsvcid": "4420", 00:34:17.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.356 "hdgst": false, 00:34:17.356 "ddgst": false 00:34:17.356 }, 00:34:17.356 "method": "bdev_nvme_attach_controller" 00:34:17.356 }' 00:34:17.356 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:17.356 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:17.356 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.356 "params": { 00:34:17.356 "name": "Nvme1", 00:34:17.356 "trtype": "tcp", 00:34:17.356 "traddr": "10.0.0.2", 00:34:17.356 "adrfam": "ipv4", 00:34:17.356 "trsvcid": "4420", 00:34:17.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.356 "hdgst": false, 00:34:17.356 "ddgst": false 00:34:17.356 }, 00:34:17.356 "method": "bdev_nvme_attach_controller" 00:34:17.356 }' 00:34:17.356 13:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:17.356 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.356 "params": { 00:34:17.356 "name": "Nvme1", 00:34:17.356 "trtype": "tcp", 00:34:17.356 "traddr": "10.0.0.2", 00:34:17.356 "adrfam": "ipv4", 00:34:17.356 "trsvcid": "4420", 00:34:17.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.356 "hdgst": false, 00:34:17.356 "ddgst": false 00:34:17.356 }, 00:34:17.356 "method": "bdev_nvme_attach_controller" 00:34:17.356 }' 00:34:17.356 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:17.356 13:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:17.356 "params": { 00:34:17.356 "name": "Nvme1", 00:34:17.356 "trtype": "tcp", 00:34:17.356 "traddr": "10.0.0.2", 00:34:17.356 "adrfam": "ipv4", 00:34:17.356 "trsvcid": "4420", 00:34:17.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:17.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:17.356 "hdgst": false, 00:34:17.356 "ddgst": false 00:34:17.356 }, 00:34:17.356 "method": "bdev_nvme_attach_controller" 00:34:17.356 }' 00:34:17.356 [2024-11-25 13:08:57.031319] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:34:17.356 [2024-11-25 13:08:57.031372] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:17.356 [2024-11-25 13:08:57.032158] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:34:17.356 [2024-11-25 13:08:57.032214] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:17.356 [2024-11-25 13:08:57.033329] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:34:17.356 [2024-11-25 13:08:57.033375] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:17.356 [2024-11-25 13:08:57.036818] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:34:17.356 [2024-11-25 13:08:57.036869] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:17.356 [2024-11-25 13:08:57.185028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.356 [2024-11-25 13:08:57.213574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:17.356 [2024-11-25 13:08:57.225294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.356 [2024-11-25 13:08:57.253652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:17.616 [2024-11-25 13:08:57.270698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.616 [2024-11-25 13:08:57.299670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:17.616 [2024-11-25 13:08:57.330888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.616 [2024-11-25 13:08:57.360114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:17.616 Running I/O for 1 seconds... 00:34:17.616 Running I/O for 1 seconds... 00:34:17.616 Running I/O for 1 seconds... 00:34:17.875 Running I/O for 1 seconds... 
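Each workload above runs in its own bdevperf process, and the EAL parameter dumps show how the four are kept from stepping on each other: disjoint core masks (0x10, 0x20, 0x40, 0x80), a 256 MB memory cap apiece, and distinct --file-prefix values (spdk1 through spdk4, derived from the -i shm id) so each instance gets private DPDK hugepage state. A hedged sketch of the same orchestration written as a loop; the script itself spells the four launches out individually:

pids=()
i=1
for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
    set -- $spec   # $1 = core mask, $2 = workload
    bdevperf -m "$1" -i "$i" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$2" -t 1 -s 256 &
    pids+=($!)
    i=$((i + 1))
done
wait "${pids[@]}"   # the trace waits on 892696, 892698, 892700, 892703 in turn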
00:34:18.815 20775.00 IOPS, 81.15 MiB/s
00:34:18.815 Latency(us)
00:34:18.815 [2024-11-25T12:08:58.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:18.815 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:34:18.815 Nvme1n1 : 1.01 20836.98 81.39 0.00 0.00 6127.90 2484.91 7973.55
00:34:18.815 [2024-11-25T12:08:58.718Z] ===================================================================================================================
00:34:18.815 [2024-11-25T12:08:58.718Z] Total : 20836.98 81.39 0.00 0.00 6127.90 2484.91 7973.55
00:34:18.815 11614.00 IOPS, 45.37 MiB/s
00:34:18.815 Latency(us)
00:34:18.815 [2024-11-25T12:08:58.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:18.815 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:34:18.815 Nvme1n1 : 1.01 11670.13 45.59 0.00 0.00 10932.05 4696.75 14636.37
00:34:18.815 [2024-11-25T12:08:58.718Z] ===================================================================================================================
00:34:18.815 [2024-11-25T12:08:58.718Z] Total : 11670.13 45.59 0.00 0.00 10932.05 4696.75 14636.37
00:34:18.815 12021.00 IOPS, 46.96 MiB/s
00:34:18.815 Latency(us)
00:34:18.815 [2024-11-25T12:08:58.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:18.815 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:34:18.815 Nvme1n1 : 1.01 12098.40 47.26 0.00 0.00 10545.98 2184.53 17367.04
00:34:18.815 [2024-11-25T12:08:58.718Z] ===================================================================================================================
00:34:18.815 [2024-11-25T12:08:58.718Z] Total : 12098.40 47.26 0.00 0.00 10545.98 2184.53 17367.04
00:34:18.815 182104.00 IOPS, 711.34 MiB/s
00:34:18.815 Latency(us)
00:34:18.815 [2024-11-25T12:08:58.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:18.815 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:34:18.815 Nvme1n1 : 1.00 181730.91 709.89 0.00 0.00 700.51 303.79 2034.35
00:34:18.815 [2024-11-25T12:08:58.718Z] ===================================================================================================================
00:34:18.815 [2024-11-25T12:08:58.718Z] Total : 181730.91 709.89 0.00 0.00 700.51 303.79 2034.35
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 892698
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 892700
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 892703
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:18.815 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:18.816 rmmod nvme_tcp 00:34:18.816 rmmod nvme_fabrics 00:34:18.816 rmmod nvme_keyring 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 892479 ']' 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 892479 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 892479 ']' 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 892479 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 892479 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 892479' 00:34:19.075 killing process with pid 892479 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 892479 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 892479 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:19.075 
13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.075 13:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.617 13:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:21.617 00:34:21.617 real 0m13.821s 00:34:21.617 user 0m15.091s 00:34:21.617 sys 0m8.000s 00:34:21.617 13:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:21.617 13:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:21.617 ************************************ 00:34:21.617 END TEST nvmf_bdev_io_wait 00:34:21.617 ************************************ 00:34:21.617 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:21.617 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:21.617 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:21.617 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:21.617 ************************************ 00:34:21.617 START TEST nvmf_queue_depth 00:34:21.617 ************************************ 00:34:21.617 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:21.617 * Looking for test storage... 
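The teardown that closes nvmf_bdev_io_wait above is nvmftestfini: sync, unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, kill the target with killprocess, strip the SPDK_NVMF-tagged iptables rules, and remove the namespace. The killprocess steps visible in the trace (a kill -0 liveness probe and a ps comm lookup that refuses to signal sudo) guard against firing at a recycled PID; a rough sketch with the same semantics, not the verbatim helper:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0        # process already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1        # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null
    return 0
}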
00:34:21.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:21.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.618 --rc genhtml_branch_coverage=1 00:34:21.618 --rc genhtml_function_coverage=1 00:34:21.618 --rc genhtml_legend=1 00:34:21.618 --rc geninfo_all_blocks=1 00:34:21.618 --rc geninfo_unexecuted_blocks=1 00:34:21.618 00:34:21.618 ' 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:21.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.618 --rc genhtml_branch_coverage=1 00:34:21.618 --rc genhtml_function_coverage=1 00:34:21.618 --rc genhtml_legend=1 00:34:21.618 --rc geninfo_all_blocks=1 00:34:21.618 --rc geninfo_unexecuted_blocks=1 00:34:21.618 00:34:21.618 ' 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:21.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.618 --rc genhtml_branch_coverage=1 00:34:21.618 --rc genhtml_function_coverage=1 00:34:21.618 --rc genhtml_legend=1 00:34:21.618 --rc geninfo_all_blocks=1 00:34:21.618 --rc geninfo_unexecuted_blocks=1 00:34:21.618 00:34:21.618 ' 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:21.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:21.618 --rc genhtml_branch_coverage=1 00:34:21.618 --rc genhtml_function_coverage=1 00:34:21.618 --rc genhtml_legend=1 00:34:21.618 --rc geninfo_all_blocks=1 00:34:21.618 --rc 
geninfo_unexecuted_blocks=1 00:34:21.618 00:34:21.618 ' 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.618 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:21.619 13:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
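The argument assembly traced above (build_nvmf_app_args) is what makes this an interrupt-mode run: besides the shared-memory id and the 0xFFFF trace mask, the '[' 1 -eq 1 ']' branch appends --interrupt-mode to NVMF_APP. A condensed sketch of that assembly; the guard variable name here is hypothetical, since the trace only shows the already-expanded test:

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + enable all trace groups
NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless hugepages are disabled
if [ "$SPDK_TEST_NVMF_INTERRUPT_MODE" -eq 1 ]; then  # hypothetical flag name
    NVMF_APP+=(--interrupt-mode)
fi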
00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:29.759 13:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:29.759 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:29.759 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:29.759 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:34:29.760 Found net devices under 0000:31:00.0: cvl_0_0 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:29.760 Found net devices under 0000:31:00.1: cvl_0_1 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:29.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:29.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:34:29.760 00:34:29.760 --- 10.0.0.2 ping statistics --- 00:34:29.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.760 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:29.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:29.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:34:29.760 00:34:29.760 --- 10.0.0.1 ping statistics --- 00:34:29.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:29.760 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:29.760 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=898171 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 898171 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 898171 ']' 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
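Condensed from the nvmf_tcp_init trace above: the target-side port of the cabled pair is moved into a private network namespace so that initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) can exchange real TCP traffic on one host, the firewall is opened with a rule tagged SPDK_NVMF so teardown can strip it, and both directions are ping-verified:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The comment tag lets nvmftestfini drop the rule via iptables-save | grep -v SPDK_NVMF
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator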
00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:30.021 13:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.021 [2024-11-25 13:09:09.732221] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:30.021 [2024-11-25 13:09:09.733617] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:34:30.021 [2024-11-25 13:09:09.733673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.021 [2024-11-25 13:09:09.840462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.021 [2024-11-25 13:09:09.875777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.021 [2024-11-25 13:09:09.875810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.021 [2024-11-25 13:09:09.875817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.021 [2024-11-25 13:09:09.875824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.021 [2024-11-25 13:09:09.875829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:30.021 [2024-11-25 13:09:09.876393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.281 [2024-11-25 13:09:09.930574] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:30.281 [2024-11-25 13:09:09.930825] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
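nvmfappstart then amounts to launching nvmf_tgt inside that namespace and blocking in waitforlisten until the RPC socket answers; the thread.c notices above confirm both the app thread and nvmf_tgt_poll_group_000 came up as interrupt-mode threads. A reduced sketch, with waitforlisten approximated by a simple rpc.py poll loop (the real helper retries up to max_retries=100 and is more defensive):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Poll until the target's RPC socket accepts requests.
until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.5
done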
00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.853 [2024-11-25 13:09:10.569185] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.853 Malloc0 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.853 [2024-11-25 13:09:10.649245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=898536 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 898536 /var/tmp/bdevperf.sock 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 898536 ']' 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:30.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:30.853 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.853 [2024-11-25 13:09:10.686561] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
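
Condensed from the rpc_cmd traces above and below: target/queue_depth.sh builds a malloc-backed subsystem, exposes it over NVMe/TCP, then drives it with bdevperf at queue depth 1024. A hedged sketch of the same sequence via SPDK's stock scripts/rpc.py (the test's rpc_cmd shim resolves to the same RPCs; invoking rpc.py directly is an assumption, the flags are verbatim from the trace):

    # Target side: transport, 64 MiB / 512 B-block malloc bdev, subsystem,
    # namespace, and a TCP listener on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf starts paused (-z) on its own RPC socket,
    # then runs a 10 s, 4 KiB verify workload at queue depth 1024 once
    # perform_tests is issued.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
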
00:34:30.853 [2024-11-25 13:09:10.686612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898536 ] 00:34:31.114 [2024-11-25 13:09:10.762684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.114 [2024-11-25 13:09:10.798905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.114 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.114 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:31.114 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:31.114 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.114 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:31.114 NVMe0n1 00:34:31.114 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.114 13:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:31.374 Running I/O for 10 seconds... 00:34:33.258 8192.00 IOPS, 32.00 MiB/s [2024-11-25T12:09:14.104Z] 8704.00 IOPS, 34.00 MiB/s [2024-11-25T12:09:15.490Z] 8858.67 IOPS, 34.60 MiB/s [2024-11-25T12:09:16.062Z] 8940.00 IOPS, 34.92 MiB/s [2024-11-25T12:09:17.449Z] 9014.00 IOPS, 35.21 MiB/s [2024-11-25T12:09:18.392Z] 9536.17 IOPS, 37.25 MiB/s [2024-11-25T12:09:19.332Z] 9912.14 IOPS, 38.72 MiB/s [2024-11-25T12:09:20.274Z] 10235.00 IOPS, 39.98 MiB/s [2024-11-25T12:09:21.215Z] 10472.78 IOPS, 40.91 MiB/s [2024-11-25T12:09:21.215Z] 10663.40 IOPS, 41.65 MiB/s 00:34:41.312 Latency(us) 00:34:41.312 [2024-11-25T12:09:21.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.312 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:41.312 Verification LBA range: start 0x0 length 0x4000 00:34:41.312 NVMe0n1 : 10.05 10710.52 41.84 0.00 0.00 95287.91 10485.76 80827.73 00:34:41.312 [2024-11-25T12:09:21.215Z] =================================================================================================================== 00:34:41.312 [2024-11-25T12:09:21.215Z] Total : 10710.52 41.84 0.00 0.00 95287.91 10485.76 80827.73 00:34:41.312 { 00:34:41.312 "results": [ 00:34:41.312 { 00:34:41.312 "job": "NVMe0n1", 00:34:41.312 "core_mask": "0x1", 00:34:41.312 "workload": "verify", 00:34:41.312 "status": "finished", 00:34:41.312 "verify_range": { 00:34:41.312 "start": 0, 00:34:41.312 "length": 16384 00:34:41.312 }, 00:34:41.312 "queue_depth": 1024, 00:34:41.312 "io_size": 4096, 00:34:41.312 "runtime": 10.045917, 00:34:41.312 "iops": 10710.520503006346, 00:34:41.312 "mibps": 41.83797071486854, 00:34:41.312 "io_failed": 0, 00:34:41.312 "io_timeout": 0, 00:34:41.312 "avg_latency_us": 95287.91386228241, 00:34:41.312 "min_latency_us": 10485.76, 00:34:41.312 "max_latency_us": 80827.73333333334 00:34:41.312 } 00:34:41.312 ], 
00:34:41.312 "core_count": 1 00:34:41.312 } 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 898536 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 898536 ']' 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 898536 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 898536 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 898536' 00:34:41.312 killing process with pid 898536 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 898536 00:34:41.312 Received shutdown signal, test time was about 10.000000 seconds 00:34:41.312 00:34:41.312 Latency(us) 00:34:41.312 [2024-11-25T12:09:21.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.312 [2024-11-25T12:09:21.215Z] =================================================================================================================== 00:34:41.312 [2024-11-25T12:09:21.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:41.312 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 898536 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.573 rmmod nvme_tcp 00:34:41.573 rmmod nvme_fabrics 00:34:41.573 rmmod nvme_keyring 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:41.573 13:09:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 898171 ']' 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 898171 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 898171 ']' 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 898171 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 898171 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 898171' 00:34:41.573 killing process with pid 898171 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 898171 00:34:41.573 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 898171 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.834 13:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.748 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.748 00:34:43.748 real 0m22.573s 00:34:43.748 user 0m23.519s 00:34:43.748 sys 0m7.756s 00:34:43.748 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.748 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:43.748 ************************************ 00:34:43.748 END TEST nvmf_queue_depth 00:34:43.748 ************************************ 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:44.009 ************************************ 00:34:44.009 START TEST nvmf_target_multipath 00:34:44.009 ************************************ 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:44.009 * Looking for test storage... 00:34:44.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:44.009 13:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.009 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:44.010 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.271 --rc genhtml_branch_coverage=1 00:34:44.271 --rc genhtml_function_coverage=1 00:34:44.271 --rc genhtml_legend=1 00:34:44.271 --rc geninfo_all_blocks=1 00:34:44.271 --rc geninfo_unexecuted_blocks=1 00:34:44.271 00:34:44.271 ' 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.271 --rc genhtml_branch_coverage=1 00:34:44.271 --rc genhtml_function_coverage=1 00:34:44.271 --rc genhtml_legend=1 00:34:44.271 --rc geninfo_all_blocks=1 00:34:44.271 --rc geninfo_unexecuted_blocks=1 00:34:44.271 00:34:44.271 ' 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.271 --rc genhtml_branch_coverage=1 00:34:44.271 --rc genhtml_function_coverage=1 00:34:44.271 --rc genhtml_legend=1 00:34:44.271 --rc geninfo_all_blocks=1 00:34:44.271 --rc 
geninfo_unexecuted_blocks=1 00:34:44.271 00:34:44.271 ' 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:44.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.271 --rc genhtml_branch_coverage=1 00:34:44.271 --rc genhtml_function_coverage=1 00:34:44.271 --rc genhtml_legend=1 00:34:44.271 --rc geninfo_all_blocks=1 00:34:44.271 --rc geninfo_unexecuted_blocks=1 00:34:44.271 00:34:44.271 ' 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
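
The NVME_HOST pieces defined in the common.sh trace above (a hostnqn generated by nvme gen-hostnqn plus the matching hostid) are what a test passes to nvme-cli when it connects an initiator; multipath.sh never gets that far on this rig, so the command below is an illustration only, with example address and subsystem values rather than anything taken from this run.

    # Illustration, not from this trace -- address and subsystem are examples.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
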
00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.271 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:44.272 13:09:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:44.272 13:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
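
After the PCI probe traced below, nvmf_tcp_init splits the two E810 ports into a loopback-style topology: cvl_0_0 is moved into a private network namespace as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A condensed rendering of that wiring (interface names, addresses, and rule verbatim from the trace; assumes root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back
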
00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.416 13:09:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:52.416 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:52.416 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:52.416 13:09:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:52.416 Found net devices under 0000:31:00.0: cvl_0_0 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:52.416 Found net devices under 0000:31:00.1: cvl_0_1 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:52.416 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.417 13:09:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:52.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:52.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:34:52.417 00:34:52.417 --- 10.0.0.2 ping statistics --- 00:34:52.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.417 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:52.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:34:52.417 00:34:52.417 --- 10.0.0.1 ping statistics --- 00:34:52.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.417 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:52.417 only one NIC for nvmf test 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:52.417 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:52.417 rmmod nvme_tcp 00:34:52.417 rmmod nvme_fabrics 00:34:52.417 rmmod nvme_keyring 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:52.678 13:09:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:52.678 13:09:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:54.593 13:09:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.593 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:54.593 00:34:54.593 real 0m10.750s 00:34:54.594 user 0m2.391s 00:34:54.594 sys 0m6.289s 00:34:54.594 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.594 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:54.594 ************************************ 00:34:54.594 END TEST nvmf_target_multipath 00:34:54.594 ************************************ 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:54.855 ************************************ 00:34:54.855 START TEST nvmf_zcopy 00:34:54.855 ************************************ 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:54.855 * Looking for test storage... 
00:34:54.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:54.855 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:55.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.117 --rc genhtml_branch_coverage=1 00:34:55.117 --rc genhtml_function_coverage=1 00:34:55.117 --rc genhtml_legend=1 00:34:55.117 --rc geninfo_all_blocks=1 00:34:55.117 --rc geninfo_unexecuted_blocks=1 00:34:55.117 00:34:55.117 ' 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:55.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.117 --rc genhtml_branch_coverage=1 00:34:55.117 --rc genhtml_function_coverage=1 00:34:55.117 --rc genhtml_legend=1 00:34:55.117 --rc geninfo_all_blocks=1 00:34:55.117 --rc geninfo_unexecuted_blocks=1 00:34:55.117 00:34:55.117 ' 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:55.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.117 --rc genhtml_branch_coverage=1 00:34:55.117 --rc genhtml_function_coverage=1 00:34:55.117 --rc genhtml_legend=1 00:34:55.117 --rc geninfo_all_blocks=1 00:34:55.117 --rc geninfo_unexecuted_blocks=1 00:34:55.117 00:34:55.117 ' 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:55.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.117 --rc genhtml_branch_coverage=1 00:34:55.117 --rc genhtml_function_coverage=1 00:34:55.117 --rc genhtml_legend=1 00:34:55.117 --rc geninfo_all_blocks=1 00:34:55.117 --rc geninfo_unexecuted_blocks=1 00:34:55.117 00:34:55.117 ' 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:55.117 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:55.118 13:09:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:55.118 13:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:35:03.312 13:09:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:03.312 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:03.312 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:03.312 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:03.313 Found net devices under 0000:31:00.0: cvl_0_0 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:03.313 Found net devices under 0000:31:00.1: cvl_0_1 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:03.313 13:09:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:03.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:35:03.313 00:35:03.313 --- 10.0.0.2 ping statistics --- 00:35:03.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.313 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:03.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:35:03.313 00:35:03.313 --- 10.0.0.1 ping statistics --- 00:35:03.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.313 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=909971 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 909971 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 909971 ']' 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.313 13:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.313 [2024-11-25 13:09:42.976443] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:03.313 [2024-11-25 13:09:42.977416] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:35:03.313 [2024-11-25 13:09:42.977458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.313 [2024-11-25 13:09:43.076002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.313 [2024-11-25 13:09:43.110452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.313 [2024-11-25 13:09:43.110484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.313 [2024-11-25 13:09:43.110492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.313 [2024-11-25 13:09:43.110499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.313 [2024-11-25 13:09:43.110505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:03.313 [2024-11-25 13:09:43.111040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.313 [2024-11-25 13:09:43.165017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:03.313 [2024-11-25 13:09:43.165280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
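The setup traced above is the nvmf_tcp_init + nvmfappstart path from nvmf/common.sh: one port of the detected NIC pair is moved into a private network namespace to act as the target, the peer port stays in the root namespace as the initiator, the NVMe/TCP port is opened in the firewall, and nvmf_tgt is launched inside the namespace. Collected into a standalone sketch, using the interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addressing this run detected — other rigs resolve different net devices, and the nvmf_tgt path is abbreviated here:

    # Create a namespace and move the target-side port into it.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends: the initiator port stays in the root namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring up both links plus the namespaced loopback.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let the NVMe/TCP listener port (4420) through the host firewall.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check reachability in both directions before any NVMe traffic.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace: shm id 0 (-i 0), all tracepoint
    # groups enabled (-e 0xFFFF), interrupt mode, reactor on core 1 (-m 0x2).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2

The sub-millisecond pings confirm the wiring before the target comes up, and the "Reactor started on core 1" notice matches the 0x2 core mask passed to nvmf_tgt.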
00:35:03.313 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.313 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:35:03.313 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:03.313 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:03.313 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 [2024-11-25 13:09:43.227766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 [2024-11-25 13:09:43.256012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:35:03.597 13:09:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.597 malloc0 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.597 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:03.598 { 00:35:03.598 "params": { 00:35:03.598 "name": "Nvme$subsystem", 00:35:03.598 "trtype": "$TEST_TRANSPORT", 00:35:03.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.598 "adrfam": "ipv4", 00:35:03.598 "trsvcid": "$NVMF_PORT", 00:35:03.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.598 "hdgst": ${hdgst:-false}, 00:35:03.598 "ddgst": ${ddgst:-false} 00:35:03.598 }, 00:35:03.598 "method": "bdev_nvme_attach_controller" 00:35:03.598 } 00:35:03.598 EOF 00:35:03.598 )") 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:03.598 13:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:03.598 "params": { 00:35:03.598 "name": "Nvme1", 00:35:03.598 "trtype": "tcp", 00:35:03.598 "traddr": "10.0.0.2", 00:35:03.598 "adrfam": "ipv4", 00:35:03.598 "trsvcid": "4420", 00:35:03.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:03.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:03.598 "hdgst": false, 00:35:03.598 "ddgst": false 00:35:03.598 }, 00:35:03.598 "method": "bdev_nvme_attach_controller" 00:35:03.598 }' 00:35:03.598 [2024-11-25 13:09:43.357460] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:35:03.598 [2024-11-25 13:09:43.357512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910027 ] 00:35:03.598 [2024-11-25 13:09:43.434289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.598 [2024-11-25 13:09:43.471067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.229 Running I/O for 10 seconds... 00:35:06.113 6505.00 IOPS, 50.82 MiB/s [2024-11-25T12:09:46.958Z] 6569.50 IOPS, 51.32 MiB/s [2024-11-25T12:09:47.900Z] 6583.00 IOPS, 51.43 MiB/s [2024-11-25T12:09:48.843Z] 6590.50 IOPS, 51.49 MiB/s [2024-11-25T12:09:50.231Z] 6594.60 IOPS, 51.52 MiB/s [2024-11-25T12:09:51.174Z] 6600.83 IOPS, 51.57 MiB/s [2024-11-25T12:09:52.117Z] 6906.71 IOPS, 53.96 MiB/s [2024-11-25T12:09:53.060Z] 7251.12 IOPS, 56.65 MiB/s [2024-11-25T12:09:54.003Z] 7519.00 IOPS, 58.74 MiB/s [2024-11-25T12:09:54.003Z] 7733.00 IOPS, 60.41 MiB/s 00:35:14.100 Latency(us) 00:35:14.100 [2024-11-25T12:09:54.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.100 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:14.100 Verification LBA range: start 0x0 length 0x1000 00:35:14.100 Nvme1n1 : 10.05 7707.55 60.22 0.00 0.00 16497.38 2471.25 43035.31 00:35:14.100 [2024-11-25T12:09:54.003Z] =================================================================================================================== 00:35:14.100 [2024-11-25T12:09:54.003Z] Total : 7707.55 60.22 0.00 0.00 16497.38 2471.25 43035.31 00:35:14.100 13:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=912033 00:35:14.100 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:14.100 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:14.100 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:14.100 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:14.100 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:14.100 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:14.100 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:14.100 { 00:35:14.100 "params": { 00:35:14.100 "name": "Nvme$subsystem", 00:35:14.100 "trtype": "$TEST_TRANSPORT", 00:35:14.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.100 "adrfam": "ipv4", 00:35:14.100 "trsvcid": "$NVMF_PORT", 00:35:14.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.100 "hdgst": ${hdgst:-false}, 00:35:14.100 "ddgst": ${ddgst:-false} 00:35:14.100 }, 00:35:14.100 "method": "bdev_nvme_attach_controller" 00:35:14.100 } 00:35:14.100 EOF 00:35:14.100 )") 00:35:14.100 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:14.362 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:14.362 
[2024-11-25 13:09:54.007366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.007393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:14.362 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:14.362 13:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:14.362 "params": { 00:35:14.362 "name": "Nvme1", 00:35:14.362 "trtype": "tcp", 00:35:14.362 "traddr": "10.0.0.2", 00:35:14.362 "adrfam": "ipv4", 00:35:14.362 "trsvcid": "4420", 00:35:14.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:14.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:14.362 "hdgst": false, 00:35:14.362 "ddgst": false 00:35:14.362 }, 00:35:14.362 "method": "bdev_nvme_attach_controller" 00:35:14.362 }' 00:35:14.362 [2024-11-25 13:09:54.019336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.019345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.031334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.031342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.043334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.043342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.055334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.055341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.063247] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
00:35:14.362 [2024-11-25 13:09:54.063297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912033 ] 00:35:14.362 [2024-11-25 13:09:54.067333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.067342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.079334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.079341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.091333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.091341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.103335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.103344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.115333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.115341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.127334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.127341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.139333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.139341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.139543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.362 [2024-11-25 13:09:54.151334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.151342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.163334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.163343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.175094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.362 [2024-11-25 13:09:54.175334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.175342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.187338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.187347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.199339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.199352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.211336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:35:14.362 [2024-11-25 13:09:54.211348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.223336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.223344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.235334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.235342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.247374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.247390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.362 [2024-11-25 13:09:54.259336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.362 [2024-11-25 13:09:54.259347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.623 [2024-11-25 13:09:54.271337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.623 [2024-11-25 13:09:54.271348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.623 [2024-11-25 13:09:54.283334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.623 [2024-11-25 13:09:54.283342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.623 [2024-11-25 13:09:54.295333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.623 [2024-11-25 13:09:54.295341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.623 [2024-11-25 13:09:54.307333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.623 [2024-11-25 13:09:54.307341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.624 [2024-11-25 13:09:54.319334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.624 [2024-11-25 13:09:54.319345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.624 [2024-11-25 13:09:54.331336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.624 [2024-11-25 13:09:54.331348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.624 [2024-11-25 13:09:54.343339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.624 [2024-11-25 13:09:54.343354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.624 [2024-11-25 13:09:54.355336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:14.624 [2024-11-25 13:09:54.355348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:14.624 Running I/O for 5 seconds... 
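With waitforlisten satisfied, the rpc_cmd traces above provision the target over the /var/tmp/spdk.sock socket it advertised: a zero-copy-enabled TCP transport, subsystem cnode1 with data and discovery listeners on the namespaced address, and a 32 MB malloc bdev attached as namespace 1. In the harness, rpc_cmd wraps scripts/rpc.py, so the same sequence as direct invocations is roughly the following sketch (flags copied from the trace; -c 0 requests a zero in-capsule data size):

    # Zero-copy-capable TCP transport.
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem allowing any host (-a), serial SPDK00000000000001, max 10 namespaces.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

    # Data and discovery listeners on the target-side address.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MB malloc bdev with 4096-byte blocks, exported as NSID 1.
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf itself connects through the JSON fragment printed by gen_nvmf_target_json — a bdev_nvme_attach_controller call to 10.0.0.2:4420 with digests disabled — fed in over /dev/fd/63. The error pairs repeating through this phase, spdk_nvmf_subsystem_add_ns_ext rejecting "Requested NSID 1 already in use" followed by nvmf_rpc_ns_paused reporting "Unable to add namespace", are expected here: while the 5-second random read/write job (-t 5 -q 128 -w randrw -M 50 -o 8192) keeps the subsystem busy, nvmf_subsystem_add_ns is evidently re-issued for the occupied NSID 1, and each attempt is refused without disturbing the in-flight I/O.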
00:35:14.624 [2024-11-25 13:09:54.369929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:14.624 [2024-11-25 13:09:54.369945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-record error pair above repeats once per add-namespace attempt, roughly every 13-15 ms, from 13:09:54.369929 through 13:09:58.482302 (elapsed 00:35:14.624 -> 00:35:18.804); the only other records in this stretch are the periodic throughput samples: ...]
00:35:15.668 19070.00 IOPS, 148.98 MiB/s [2024-11-25T12:09:55.571Z]
00:35:16.714 19134.50 IOPS, 149.49 MiB/s [2024-11-25T12:09:56.617Z]
00:35:17.501 19125.67 IOPS, 149.42 MiB/s [2024-11-25T12:09:57.404Z]
00:35:18.543 19143.75 IOPS, 149.56 MiB/s [2024-11-25T12:09:58.446Z]
00:35:18.804 [2024-11-25 13:09:58.482302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:18.804 [2024-11-25 13:09:58.482317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.495783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.495798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.510764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.510780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.524046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.524061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.538462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.538478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.552073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.552088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.566468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.566484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.579579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.579595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.592913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.592933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.607119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.607134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.619973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.619988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.634398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.634413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.647506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.647521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.660462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.660478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.674449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.674464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.687408] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.687424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:18.804 [2024-11-25 13:09:58.700738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:18.804 [2024-11-25 13:09:58.700753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.715002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.715018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.727813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.727828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.742793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.742808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.756120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.756135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.770896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.770912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.783509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.783524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.795967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.795982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.810325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.810340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.823218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.823233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.836260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.836274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.850410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.850425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.065 [2024-11-25 13:09:58.863069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.065 [2024-11-25 13:09:58.863085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.066 [2024-11-25 13:09:58.875683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.066 [2024-11-25 13:09:58.875698] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.066 [2024-11-25 13:09:58.890380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.066 [2024-11-25 13:09:58.890395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.066 [2024-11-25 13:09:58.903130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.066 [2024-11-25 13:09:58.903144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.066 [2024-11-25 13:09:58.915951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.066 [2024-11-25 13:09:58.915966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.066 [2024-11-25 13:09:58.931038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.066 [2024-11-25 13:09:58.931053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.066 [2024-11-25 13:09:58.944377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.066 [2024-11-25 13:09:58.944392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.066 [2024-11-25 13:09:58.958791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.066 [2024-11-25 13:09:58.958806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:58.971829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:58.971845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:58.986325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:58.986339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:58.999602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:58.999617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.012474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.012488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.026421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.026436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.039505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.039521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.052408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.052423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.066308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.066323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.079439] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.079453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.092637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.092652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.106734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.106749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.119847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.119865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.134384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.134399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.147544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.147559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.160494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.160509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.174643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.174658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.188078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.188093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.202528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.202543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.326 [2024-11-25 13:09:59.215913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.326 [2024-11-25 13:09:59.215927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.587 [2024-11-25 13:09:59.230441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.587 [2024-11-25 13:09:59.230456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.587 [2024-11-25 13:09:59.243300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.587 [2024-11-25 13:09:59.243314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.587 [2024-11-25 13:09:59.256215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.587 [2024-11-25 13:09:59.256229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:19.587 [2024-11-25 13:09:59.270458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:19.587 [2024-11-25 13:09:59.270473] 
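(Context for the run above: the test is evidently re-issuing an add-namespace RPC for NSID 1 in a tight loop while I/O is in flight; subsystem.c:2123 rejects each attempt because NSID 1 is still attached, and nvmf_rpc.c:1517 then reports the failed RPC. A minimal way to reproduce the same collision by hand with SPDK's rpc.py -- a sketch, not the test script; the subsystem NQN matches the log, while the bdev name malloc0 here stands in for whatever bdev the test uses:)
  # sketch: adding the same NSID twice against a running nvmf target
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds; NSID 1 is now attached
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails: Requested NSID 1 already in use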
[... the pair continues at the same cadence through 13:09:59.375 as the timed I/O job completes; repeated entries omitted ...]
19119.00 IOPS, 149.37 MiB/s [2024-11-25T12:09:59.490Z]
00:35:19.587 Latency(us)
[2024-11-25T12:09:59.490Z] Device Information : runtime(s)     IOPS    MiB/s  Fail/s   TO/s  Average      min       max
00:35:19.587 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:35:19.587 Nvme1n1            :       5.01  19122.79   149.40    0.00   0.00  6687.85  2512.21  12288.00
[2024-11-25T12:09:59.490Z] ===================================================================================================================
[2024-11-25T12:09:59.490Z] Total              :              19122.79   149.40    0.00   0.00  6687.85  2512.21  12288.00
00:35:19.587 [2024-11-25 13:09:59.387346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:19.587 [2024-11-25 13:09:59.387361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair then repeats at roughly 12 ms intervals through 13:09:59.483; repeated entries omitted ...]
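(Sanity check on the table above, using only figures from the log: the job uses 8192 B I/O, so 19122.79 IOPS * 8192 B = 156,653,896 B/s, and 156,653,896 / 1,048,576 = 149.4 MiB/s, matching the Total row. The Average/min/max columns are latencies in microseconds, per the Latency(us) header.)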
00:35:19.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (912033) - No such process
00:35:19.848 13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 912033
00:35:19.848 13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:19.848 13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.848 13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:19.848 13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.848 13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:19.848 delay0
00:35:19.848 13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:19.848 13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-25 13:09:59.630484] nvme_fabric.c:
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:27.988 Initializing NVMe Controllers 00:35:27.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:27.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:27.988 Initialization complete. Launching workers. 00:35:27.988 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3808 00:35:27.988 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4088, failed to submit 40 00:35:27.988 success 3936, unsuccessful 152, failed 0 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:27.988 rmmod nvme_tcp 00:35:27.988 rmmod nvme_fabrics 00:35:27.988 rmmod nvme_keyring 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 909971 ']' 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 909971 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 909971 ']' 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 909971 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 909971 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 909971' 00:35:27.988 killing process with pid 909971 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 909971 
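(The abort run's counters above reconcile with each other: 3936 successful + 152 unsuccessful = 4088 aborts submitted, and 4088 submitted + 40 failed-to-submit = 4128, which equals the 320 completed + 3808 failed I/O -- consistent with the example attempting one abort per outstanding command. That last reading is an inference from the totals, not something the tool itself states.)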
00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 909971 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:27.988 13:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.931 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:28.931 00:35:28.931 real 0m34.185s 00:35:28.931 user 0m44.262s 00:35:28.931 sys 0m12.132s 00:35:28.931 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.931 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:28.931 ************************************ 00:35:28.931 END TEST nvmf_zcopy 00:35:28.931 ************************************ 00:35:28.931 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:28.931 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:28.931 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.931 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:28.931 ************************************ 00:35:28.931 START TEST nvmf_nmic 00:35:28.931 ************************************ 00:35:28.931 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:29.193 * Looking for test storage... 
00:35:29.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:29.193 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:29.193 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:35:29.193 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:29.193 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:29.193 13:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:29.193 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.194 --rc genhtml_branch_coverage=1 00:35:29.194 --rc genhtml_function_coverage=1 00:35:29.194 --rc genhtml_legend=1 00:35:29.194 --rc geninfo_all_blocks=1 00:35:29.194 --rc geninfo_unexecuted_blocks=1 00:35:29.194 00:35:29.194 ' 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.194 --rc genhtml_branch_coverage=1 00:35:29.194 --rc genhtml_function_coverage=1 00:35:29.194 --rc genhtml_legend=1 00:35:29.194 --rc geninfo_all_blocks=1 00:35:29.194 --rc geninfo_unexecuted_blocks=1 00:35:29.194 00:35:29.194 ' 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.194 --rc genhtml_branch_coverage=1 00:35:29.194 --rc genhtml_function_coverage=1 00:35:29.194 --rc genhtml_legend=1 00:35:29.194 --rc geninfo_all_blocks=1 00:35:29.194 --rc geninfo_unexecuted_blocks=1 00:35:29.194 00:35:29.194 ' 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.194 --rc genhtml_branch_coverage=1 00:35:29.194 --rc genhtml_function_coverage=1 00:35:29.194 --rc genhtml_legend=1 00:35:29.194 --rc geninfo_all_blocks=1 00:35:29.194 --rc geninfo_unexecuted_blocks=1 00:35:29.194 00:35:29.194 ' 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.194 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:29.194 13:10:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:29.195 13:10:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.338 13:10:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.338 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:37.339 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.339 13:10:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:37.339 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:37.339 Found net devices under 0000:31:00.0: cvl_0_0 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.339 
13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:37.339 Found net devices under 0000:31:00.1: cvl_0_1 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.339 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
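For reference, the target-side plumbing that nvmf_tcp_init traces out above reduces to a short sequence of iproute2 commands. This is a hedged recap of the calls visible in the log (nvmf/common.sh@271-278), not the script itself; that cvl_0_0 and cvl_0_1 are cabled back-to-back is an inference from the pings that follow.

    # Recap of the namespace topology built above (assumes the two E810
    # ports cvl_0_0 and cvl_0_1 are looped back-to-back on this CI node).
    ip netns add cvl_0_0_ns_spdk                # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator keeps the host-side port
    ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the netns

The lines that follow bring both links up, open TCP port 4420 through iptables on the initiator side, and confirm reachability with one ping in each direction.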
00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:37.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:37.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:35:37.600 00:35:37.600 --- 10.0.0.2 ping statistics --- 00:35:37.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.600 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:37.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:37.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:35:37.600 00:35:37.600 --- 10.0.0.1 ping statistics --- 00:35:37.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.600 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=919052 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 919052 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 919052 ']' 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.600 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:37.861 [2024-11-25 13:10:17.512913] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:37.861 [2024-11-25 13:10:17.513900] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:35:37.861 [2024-11-25 13:10:17.513936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.861 [2024-11-25 13:10:17.601007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:37.861 [2024-11-25 13:10:17.637988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.861 [2024-11-25 13:10:17.638021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.861 [2024-11-25 13:10:17.638029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.861 [2024-11-25 13:10:17.638036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.861 [2024-11-25 13:10:17.638042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:37.861 [2024-11-25 13:10:17.639471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.861 [2024-11-25 13:10:17.639554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.861 [2024-11-25 13:10:17.639709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.861 [2024-11-25 13:10:17.639709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.861 [2024-11-25 13:10:17.694495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:37.861 [2024-11-25 13:10:17.694571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:37.861 [2024-11-25 13:10:17.694948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:35:37.861 [2024-11-25 13:10:17.695595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:37.861 [2024-11-25 13:10:17.695631] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.861 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.123 [2024-11-25 13:10:17.764474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.123 Malloc0 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
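The provisioning sequence just issued through rpc_cmd maps one-to-one onto scripts/rpc.py calls against the default /var/tmp/spdk.sock socket. Issued by hand it would look roughly like the sketch below; the arguments are copied from the trace, only the $RPC shorthand is ours.

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                  # allow any host, set serial
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The listener notice follows below, and test case1 then deliberately adds the same Malloc0 to a second subsystem (cnode2): that RPC is expected to fail, because the first subsystem already holds an exclusive_write claim on the bdev.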
00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.123 [2024-11-25 13:10:17.844367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:38.123 test case1: single bdev can't be used in multiple subsystems 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.123 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.124 [2024-11-25 13:10:17.880096] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:38.124 [2024-11-25 13:10:17.880115] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:38.124 [2024-11-25 13:10:17.880123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:38.124 request: 00:35:38.124 { 00:35:38.124 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:38.124 "namespace": { 00:35:38.124 "bdev_name": "Malloc0", 00:35:38.124 "no_auto_visible": false 00:35:38.124 }, 00:35:38.124 "method": "nvmf_subsystem_add_ns", 00:35:38.124 "req_id": 1 00:35:38.124 } 00:35:38.124 Got JSON-RPC error response 00:35:38.124 response: 00:35:38.124 { 00:35:38.124 "code": -32602, 00:35:38.124 "message": "Invalid parameters" 00:35:38.124 } 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:38.124 13:10:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:38.124 Adding namespace failed - expected result. 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:38.124 test case2: host connect to nvmf target in multiple paths 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:38.124 [2024-11-25 13:10:17.892209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.124 13:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:38.386 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:38.958 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:38.958 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:38.958 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:38.958 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:38.958 13:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:40.870 13:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:40.870 13:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:40.870 13:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:40.870 13:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:40.870 13:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:40.870 13:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:40.870 13:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:40.870 [global] 00:35:40.870 thread=1 00:35:40.870 invalidate=1 
00:35:40.870 rw=write 00:35:40.870 time_based=1 00:35:40.870 runtime=1 00:35:40.870 ioengine=libaio 00:35:40.870 direct=1 00:35:40.870 bs=4096 00:35:40.870 iodepth=1 00:35:40.870 norandommap=0 00:35:40.870 numjobs=1 00:35:40.870 00:35:40.870 verify_dump=1 00:35:40.870 verify_backlog=512 00:35:40.870 verify_state_save=0 00:35:40.870 do_verify=1 00:35:40.870 verify=crc32c-intel 00:35:40.870 [job0] 00:35:40.870 filename=/dev/nvme0n1 00:35:40.870 Could not set queue depth (nvme0n1) 00:35:41.457 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:41.457 fio-3.35 00:35:41.457 Starting 1 thread 00:35:42.401 00:35:42.401 job0: (groupid=0, jobs=1): err= 0: pid=919924: Mon Nov 25 13:10:22 2024 00:35:42.401 read: IOPS=653, BW=2613KiB/s (2676kB/s)(2616KiB/1001msec) 00:35:42.401 slat (nsec): min=7103, max=56541, avg=23651.68, stdev=7208.17 00:35:42.401 clat (usec): min=384, max=1057, avg=749.61, stdev=92.70 00:35:42.401 lat (usec): min=400, max=1083, avg=773.27, stdev=95.02 00:35:42.401 clat percentiles (usec): 00:35:42.401 | 1.00th=[ 482], 5.00th=[ 586], 10.00th=[ 619], 20.00th=[ 685], 00:35:42.401 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 775], 60.00th=[ 799], 00:35:42.401 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 865], 00:35:42.401 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057], 00:35:42.401 | 99.99th=[ 1057] 00:35:42.401 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:35:42.401 slat (usec): min=9, max=31533, avg=61.11, stdev=984.52 00:35:42.401 clat (usec): min=119, max=1143, avg=409.91, stdev=113.21 00:35:42.401 lat (usec): min=130, max=31988, avg=471.02, stdev=992.75 00:35:42.401 clat percentiles (usec): 00:35:42.401 | 1.00th=[ 128], 5.00th=[ 225], 10.00th=[ 255], 20.00th=[ 322], 00:35:42.401 | 30.00th=[ 343], 40.00th=[ 367], 50.00th=[ 412], 60.00th=[ 437], 00:35:42.401 | 70.00th=[ 465], 80.00th=[ 515], 90.00th=[ 553], 95.00th=[ 586], 00:35:42.401 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 832], 99.95th=[ 1139], 00:35:42.401 | 99.99th=[ 1139] 00:35:42.401 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:42.401 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:42.401 lat (usec) : 250=5.66%, 500=40.58%, 750=32.18%, 1000=21.39% 00:35:42.401 lat (msec) : 2=0.18% 00:35:42.401 cpu : usr=3.00%, sys=4.20%, ctx=1681, majf=0, minf=1 00:35:42.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.401 issued rwts: total=654,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.401 00:35:42.401 Run status group 0 (all jobs): 00:35:42.401 READ: bw=2613KiB/s (2676kB/s), 2613KiB/s-2613KiB/s (2676kB/s-2676kB/s), io=2616KiB (2679kB), run=1001-1001msec 00:35:42.401 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:35:42.402 00:35:42.402 Disk stats (read/write): 00:35:42.402 nvme0n1: ios=549/1024, merge=0/0, ticks=1357/423, in_queue=1780, util=98.90% 00:35:42.402 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:42.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:42.663 13:10:22 
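fio-wrapper echoes the job file it generates before handing it to fio; flattened into the log above, it is easier to read put back together (identical parameters, nothing added):

    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1

In other words, a one-second, queue-depth-1, 4 KiB sequential write pass with CRC32C verification against the freshly connected namespace; the run above finished with err=0, writing 4096 KiB and reading 2616 KiB back for verification.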
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:42.663 rmmod nvme_tcp 00:35:42.663 rmmod nvme_fabrics 00:35:42.663 rmmod nvme_keyring 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 919052 ']' 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 919052 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 919052 ']' 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 919052 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 919052 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 919052' 00:35:42.663 killing process with pid 919052 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 919052 00:35:42.663 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 919052 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:42.924 13:10:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:44.895 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:44.895 00:35:44.895 real 0m15.928s 00:35:44.895 user 0m33.154s 00:35:44.895 sys 0m7.946s 00:35:44.895 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:44.895 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:44.895 ************************************ 00:35:44.895 END TEST nvmf_nmic 00:35:44.895 ************************************ 00:35:44.895 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:44.895 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:44.895 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:44.895 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:45.157 ************************************ 00:35:45.157 START TEST nvmf_fio_target 00:35:45.157 ************************************ 00:35:45.157 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:45.157 * Looking for test storage... 
00:35:45.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:45.157 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:45.157 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:45.157 13:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.157 --rc genhtml_branch_coverage=1 00:35:45.157 --rc genhtml_function_coverage=1 00:35:45.157 --rc genhtml_legend=1 00:35:45.157 --rc geninfo_all_blocks=1 00:35:45.157 --rc geninfo_unexecuted_blocks=1 00:35:45.157 00:35:45.157 ' 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.157 --rc genhtml_branch_coverage=1 00:35:45.157 --rc genhtml_function_coverage=1 00:35:45.157 --rc genhtml_legend=1 00:35:45.157 --rc geninfo_all_blocks=1 00:35:45.157 --rc geninfo_unexecuted_blocks=1 00:35:45.157 00:35:45.157 ' 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.157 --rc genhtml_branch_coverage=1 00:35:45.157 --rc genhtml_function_coverage=1 00:35:45.157 --rc genhtml_legend=1 00:35:45.157 --rc geninfo_all_blocks=1 00:35:45.157 --rc geninfo_unexecuted_blocks=1 00:35:45.157 00:35:45.157 ' 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:45.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.157 --rc genhtml_branch_coverage=1 00:35:45.157 --rc genhtml_function_coverage=1 00:35:45.157 --rc genhtml_legend=1 00:35:45.157 --rc geninfo_all_blocks=1 00:35:45.157 --rc geninfo_unexecuted_blocks=1 00:35:45.157 
00:35:45.157 ' 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.157 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:45.158 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:45.419 13:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:53.579 13:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:53.579 13:10:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:53.579 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:53.579 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:53.579 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:53.580 Found net 
devices under 0000:31:00.0: cvl_0_0 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:53.580 Found net devices under 0000:31:00.1: cvl_0_1 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:53.580 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:53.840 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:53.840 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:53.840 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:53.840 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:53.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:53.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:35:53.840 00:35:53.840 --- 10.0.0.2 ping statistics --- 00:35:53.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.840 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:53.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:53.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:35:53.841 00:35:53.841 --- 10.0.0.1 ping statistics --- 00:35:53.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:53.841 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=924941 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 924941 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 924941 ']' 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
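The nvmf_tcp_init sequence above is what gives this single-host test two real network endpoints: one port of the E810 pair (cvl_0_0, the target side) is moved into a private network namespace while its sibling (cvl_0_1, the initiator side) stays in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 is routed over the NIC port pair instead of short-circuiting through kernel loopback. Below is a minimal standalone sketch of the same topology, using the interface names and addresses from the log; the ipts helper seen above is SPDK's iptables wrapper, which only appends a tracking comment to the rule it installs.

# Sketch only: replays the namespace setup performed by nvmf/common.sh above.
# Assumes cvl_0_0 / cvl_0_1 already exist (the two E810 ports bound to ice).
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                                   # drop stale addresses
ip -4 addr flush cvl_0_1
ip netns add "$NS"                                         # target-side namespace
ip link set cvl_0_0 netns "$NS"                            # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address (root ns)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside ns)
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator interface, then verify both
# directions, exactly as the log does before the target is started.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                     # namespace -> root ns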
00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.841 13:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:53.841 [2024-11-25 13:10:33.644494] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:53.841 [2024-11-25 13:10:33.645633] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:35:53.841 [2024-11-25 13:10:33.645687] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:53.841 [2024-11-25 13:10:33.736848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:54.101 [2024-11-25 13:10:33.778092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:54.101 [2024-11-25 13:10:33.778127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:54.101 [2024-11-25 13:10:33.778135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:54.101 [2024-11-25 13:10:33.778143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:54.101 [2024-11-25 13:10:33.778149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:54.101 [2024-11-25 13:10:33.779910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.101 [2024-11-25 13:10:33.780120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.101 [2024-11-25 13:10:33.780121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:54.101 [2024-11-25 13:10:33.779973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:54.101 [2024-11-25 13:10:33.835890] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:54.101 [2024-11-25 13:10:33.835956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:54.101 [2024-11-25 13:10:33.836954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:54.101 [2024-11-25 13:10:33.837903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:54.101 [2024-11-25 13:10:33.837983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
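The nvmfappstart above launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --interrupt-mode, which is why every reactor and spdk_thread reports being set to intr mode: the poll groups block on file descriptors instead of busy-polling, and the whole fio.sh run below exercises the target in that mode. Once waitforlisten returns, the target is provisioned entirely through scripts/rpc.py. The following is a condensed sketch of that RPC sequence with flags copied from the log. Assumptions: rpc.py is on PATH (the log uses its full path) and talks to the default /var/tmp/spdk.sock; the NQN variable and the loops are shorthand, since fio.sh issues one call per bdev; the real nvme connect also passes --hostnqn/--hostid, omitted here.

# Condensed sketch of the provisioning that fio.sh performs below.
NQN=nqn.2016-06.io.spdk:cnode1

rpc.py nvmf_create_transport -t tcp -o -u 8192             # TCP transport, options as logged

for i in {0..6}; do                                        # names auto-assigned Malloc0..Malloc6
    rpc.py bdev_malloc_create 64 512                       # 64 MiB bdev, 512-byte blocks
done
rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

rpc.py nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME   # -a: allow any host
for bdev in Malloc0 Malloc1 raid0 concat0; do
    rpc.py nvmf_subsystem_add_ns "$NQN" "$bdev"            # exported as nvme0n1..nvme0n4
done
rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Kernel initiator in the root namespace connects across the namespace boundary:
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420

The four namespaces are what fio later opens as /dev/nvme0n1 through /dev/nvme0n4; waitforserial counts them by matching the SPDKISFASTANDAWESOME serial in lsblk output before the first fio job starts.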
00:35:54.671 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.671 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:54.671 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:54.671 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:54.671 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:54.671 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:54.671 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:54.931 [2024-11-25 13:10:34.596569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.932 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:54.932 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:54.932 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:55.191 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:55.191 13:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:55.452 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:55.452 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:55.452 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:55.452 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:55.712 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:55.972 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:55.972 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:55.972 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:55.972 13:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:56.232 13:10:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:56.232 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:56.493 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:56.493 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:56.493 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:56.753 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:56.753 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:57.015 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:57.015 [2024-11-25 13:10:36.840725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.015 13:10:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:57.275 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:57.537 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:57.798 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:57.798 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:57.798 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:57.798 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:57.798 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:57.798 13:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:59.709 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:59.709 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:59.709 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:59.969 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:59.969 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:59.969 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:59.969 13:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:59.969 [global] 00:35:59.969 thread=1 00:35:59.969 invalidate=1 00:35:59.969 rw=write 00:35:59.969 time_based=1 00:35:59.969 runtime=1 00:35:59.969 ioengine=libaio 00:35:59.969 direct=1 00:35:59.969 bs=4096 00:35:59.969 iodepth=1 00:35:59.970 norandommap=0 00:35:59.970 numjobs=1 00:35:59.970 00:35:59.970 verify_dump=1 00:35:59.970 verify_backlog=512 00:35:59.970 verify_state_save=0 00:35:59.970 do_verify=1 00:35:59.970 verify=crc32c-intel 00:35:59.970 [job0] 00:35:59.970 filename=/dev/nvme0n1 00:35:59.970 [job1] 00:35:59.970 filename=/dev/nvme0n2 00:35:59.970 [job2] 00:35:59.970 filename=/dev/nvme0n3 00:35:59.970 [job3] 00:35:59.970 filename=/dev/nvme0n4 00:35:59.970 Could not set queue depth (nvme0n1) 00:35:59.970 Could not set queue depth (nvme0n2) 00:35:59.970 Could not set queue depth (nvme0n3) 00:35:59.970 Could not set queue depth (nvme0n4) 00:36:00.231 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:00.231 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:00.231 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:00.231 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:00.231 fio-3.35 00:36:00.231 Starting 4 threads 00:36:01.618 00:36:01.618 job0: (groupid=0, jobs=1): err= 0: pid=926279: Mon Nov 25 13:10:41 2024 00:36:01.618 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1011msec) 00:36:01.618 slat (nsec): min=7747, max=26577, avg=24272.88, stdev=5716.00 00:36:01.618 clat (usec): min=1056, max=42071, avg=39528.28, stdev=9915.07 00:36:01.618 lat (usec): min=1066, max=42097, avg=39552.55, stdev=9918.67 00:36:01.618 clat percentiles (usec): 00:36:01.618 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41681], 20.00th=[41681], 00:36:01.618 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:01.618 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:01.618 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:01.618 | 99.99th=[42206] 00:36:01.618 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:36:01.618 slat (nsec): min=9689, max=55961, avg=30241.12, stdev=10010.56 00:36:01.618 clat (usec): min=285, max=1017, avg=623.82, stdev=117.32 00:36:01.618 lat (usec): min=297, max=1052, avg=654.06, stdev=122.08 00:36:01.618 clat percentiles (usec): 00:36:01.618 | 1.00th=[ 355], 5.00th=[ 392], 10.00th=[ 474], 20.00th=[ 519], 00:36:01.618 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 660], 00:36:01.618 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 807], 
00:36:01.618 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 1020], 99.95th=[ 1020], 00:36:01.618 | 99.99th=[ 1020] 00:36:01.618 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:36:01.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:01.618 lat (usec) : 500=15.69%, 750=68.43%, 1000=12.48% 00:36:01.618 lat (msec) : 2=0.38%, 50=3.02% 00:36:01.618 cpu : usr=0.79%, sys=1.39%, ctx=534, majf=0, minf=1 00:36:01.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.618 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:01.618 job1: (groupid=0, jobs=1): err= 0: pid=926289: Mon Nov 25 13:10:41 2024 00:36:01.618 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:36:01.618 slat (nsec): min=26179, max=27643, avg=26573.78, stdev=344.97 00:36:01.618 clat (usec): min=775, max=42011, avg=39522.42, stdev=9675.69 00:36:01.618 lat (usec): min=801, max=42038, avg=39548.99, stdev=9675.64 00:36:01.618 clat percentiles (usec): 00:36:01.618 | 1.00th=[ 775], 5.00th=[ 775], 10.00th=[41157], 20.00th=[41157], 00:36:01.618 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:36:01.618 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:01.618 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:01.618 | 99.99th=[42206] 00:36:01.618 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:36:01.618 slat (nsec): min=9390, max=69443, avg=31377.31, stdev=8396.70 00:36:01.618 clat (usec): min=219, max=998, avg=594.11, stdev=122.43 00:36:01.618 lat (usec): min=228, max=1032, avg=625.48, stdev=125.11 00:36:01.618 clat percentiles (usec): 00:36:01.618 | 1.00th=[ 334], 5.00th=[ 396], 10.00th=[ 420], 20.00th=[ 494], 00:36:01.618 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:36:01.618 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 783], 00:36:01.618 | 99.00th=[ 840], 99.50th=[ 898], 99.90th=[ 996], 99.95th=[ 996], 00:36:01.618 | 99.99th=[ 996] 00:36:01.618 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:36:01.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:01.618 lat (usec) : 250=0.19%, 500=21.13%, 750=65.85%, 1000=9.62% 00:36:01.618 lat (msec) : 50=3.21% 00:36:01.618 cpu : usr=1.16%, sys=1.84%, ctx=530, majf=0, minf=2 00:36:01.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.618 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:01.618 job2: (groupid=0, jobs=1): err= 0: pid=926306: Mon Nov 25 13:10:41 2024 00:36:01.618 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:01.618 slat (nsec): min=7819, max=57284, avg=28173.06, stdev=2776.94 00:36:01.618 clat (usec): min=488, max=1135, avg=933.53, stdev=80.07 00:36:01.618 lat (usec): min=516, max=1163, avg=961.70, stdev=79.91 00:36:01.618 clat percentiles (usec): 00:36:01.618 | 1.00th=[ 668], 5.00th=[ 783], 10.00th=[ 832], 20.00th=[ 889], 
00:36:01.618 | 30.00th=[ 914], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 963], 00:36:01.618 | 70.00th=[ 971], 80.00th=[ 988], 90.00th=[ 1012], 95.00th=[ 1045], 00:36:01.618 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 1139], 99.95th=[ 1139], 00:36:01.618 | 99.99th=[ 1139] 00:36:01.618 write: IOPS=791, BW=3165KiB/s (3241kB/s)(3168KiB/1001msec); 0 zone resets 00:36:01.618 slat (nsec): min=9452, max=69060, avg=32241.35, stdev=10510.43 00:36:01.618 clat (usec): min=226, max=903, avg=596.23, stdev=117.54 00:36:01.618 lat (usec): min=239, max=941, avg=628.47, stdev=122.30 00:36:01.618 clat percentiles (usec): 00:36:01.618 | 1.00th=[ 326], 5.00th=[ 388], 10.00th=[ 445], 20.00th=[ 494], 00:36:01.618 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:36:01.618 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 783], 00:36:01.618 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 906], 99.95th=[ 906], 00:36:01.618 | 99.99th=[ 906] 00:36:01.618 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:36:01.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:01.618 lat (usec) : 250=0.08%, 500=12.73%, 750=43.71%, 1000=37.12% 00:36:01.618 lat (msec) : 2=6.37% 00:36:01.618 cpu : usr=2.90%, sys=5.10%, ctx=1306, majf=0, minf=1 00:36:01.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.618 issued rwts: total=512,792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:01.618 job3: (groupid=0, jobs=1): err= 0: pid=926313: Mon Nov 25 13:10:41 2024 00:36:01.619 read: IOPS=450, BW=1801KiB/s (1844kB/s)(1808KiB/1004msec) 00:36:01.619 slat (nsec): min=7107, max=61726, avg=24043.24, stdev=8440.61 00:36:01.619 clat (usec): min=492, max=42025, avg=1637.41, stdev=5723.91 00:36:01.619 lat (usec): min=500, max=42053, avg=1661.45, stdev=5724.25 00:36:01.619 clat percentiles (usec): 00:36:01.619 | 1.00th=[ 586], 5.00th=[ 660], 10.00th=[ 693], 20.00th=[ 742], 00:36:01.619 | 30.00th=[ 775], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 840], 00:36:01.619 | 70.00th=[ 865], 80.00th=[ 906], 90.00th=[ 996], 95.00th=[ 1037], 00:36:01.619 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:01.619 | 99.99th=[42206] 00:36:01.619 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:36:01.619 slat (nsec): min=10262, max=56146, avg=28987.35, stdev=11882.28 00:36:01.619 clat (usec): min=204, max=1783, avg=449.54, stdev=101.16 00:36:01.619 lat (usec): min=240, max=1818, avg=478.52, stdev=106.35 00:36:01.619 clat percentiles (usec): 00:36:01.619 | 1.00th=[ 247], 5.00th=[ 302], 10.00th=[ 334], 20.00th=[ 363], 00:36:01.619 | 30.00th=[ 396], 40.00th=[ 445], 50.00th=[ 469], 60.00th=[ 486], 00:36:01.619 | 70.00th=[ 498], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 562], 00:36:01.619 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 1778], 99.95th=[ 1778], 00:36:01.619 | 99.99th=[ 1778] 00:36:01.619 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:36:01.619 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:01.619 lat (usec) : 250=0.62%, 500=37.34%, 750=25.10%, 1000=32.47% 00:36:01.619 lat (msec) : 2=3.53%, 50=0.93% 00:36:01.619 cpu : usr=1.30%, sys=2.59%, ctx=966, majf=0, minf=1 00:36:01.619 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.619 issued rwts: total=452,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:01.619 00:36:01.619 Run status group 0 (all jobs): 00:36:01.619 READ: bw=3857KiB/s (3950kB/s), 67.3KiB/s-2046KiB/s (68.9kB/s-2095kB/s), io=3996KiB (4092kB), run=1001-1036msec 00:36:01.619 WRITE: bw=8988KiB/s (9204kB/s), 1977KiB/s-3165KiB/s (2024kB/s-3241kB/s), io=9312KiB (9535kB), run=1001-1036msec 00:36:01.619 00:36:01.619 Disk stats (read/write): 00:36:01.619 nvme0n1: ios=37/512, merge=0/0, ticks=1426/306, in_queue=1732, util=96.29% 00:36:01.619 nvme0n2: ios=61/512, merge=0/0, ticks=656/232, in_queue=888, util=96.32% 00:36:01.619 nvme0n3: ios=565/512, merge=0/0, ticks=1379/251, in_queue=1630, util=96.61% 00:36:01.619 nvme0n4: ios=469/512, merge=0/0, ticks=1445/223, in_queue=1668, util=96.67% 00:36:01.619 13:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:01.619 [global] 00:36:01.619 thread=1 00:36:01.619 invalidate=1 00:36:01.619 rw=randwrite 00:36:01.619 time_based=1 00:36:01.619 runtime=1 00:36:01.619 ioengine=libaio 00:36:01.619 direct=1 00:36:01.619 bs=4096 00:36:01.619 iodepth=1 00:36:01.619 norandommap=0 00:36:01.619 numjobs=1 00:36:01.619 00:36:01.619 verify_dump=1 00:36:01.619 verify_backlog=512 00:36:01.619 verify_state_save=0 00:36:01.619 do_verify=1 00:36:01.619 verify=crc32c-intel 00:36:01.619 [job0] 00:36:01.619 filename=/dev/nvme0n1 00:36:01.619 [job1] 00:36:01.619 filename=/dev/nvme0n2 00:36:01.619 [job2] 00:36:01.619 filename=/dev/nvme0n3 00:36:01.619 [job3] 00:36:01.619 filename=/dev/nvme0n4 00:36:01.619 Could not set queue depth (nvme0n1) 00:36:01.619 Could not set queue depth (nvme0n2) 00:36:01.619 Could not set queue depth (nvme0n3) 00:36:01.619 Could not set queue depth (nvme0n4) 00:36:01.880 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:01.880 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:01.880 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:01.880 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:01.880 fio-3.35 00:36:01.880 Starting 4 threads 00:36:03.266 00:36:03.266 job0: (groupid=0, jobs=1): err= 0: pid=926744: Mon Nov 25 13:10:43 2024 00:36:03.266 read: IOPS=16, BW=67.0KiB/s (68.6kB/s)(68.0KiB/1015msec) 00:36:03.266 slat (nsec): min=25879, max=29302, avg=26261.41, stdev=797.66 00:36:03.266 clat (usec): min=1184, max=42019, avg=39561.79, stdev=9889.55 00:36:03.266 lat (usec): min=1211, max=42045, avg=39588.06, stdev=9889.50 00:36:03.266 clat percentiles (usec): 00:36:03.266 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41681], 20.00th=[41681], 00:36:03.266 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:36:03.266 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:03.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:03.266 | 99.99th=[42206] 00:36:03.266 write: IOPS=504, BW=2018KiB/s 
(2066kB/s)(2048KiB/1015msec); 0 zone resets 00:36:03.266 slat (nsec): min=9794, max=80938, avg=29865.50, stdev=10205.27 00:36:03.266 clat (usec): min=185, max=1076, avg=631.52, stdev=130.46 00:36:03.266 lat (usec): min=197, max=1109, avg=661.38, stdev=134.37 00:36:03.266 clat percentiles (usec): 00:36:03.266 | 1.00th=[ 306], 5.00th=[ 408], 10.00th=[ 465], 20.00th=[ 523], 00:36:03.266 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 668], 00:36:03.266 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 824], 00:36:03.266 | 99.00th=[ 938], 99.50th=[ 988], 99.90th=[ 1074], 99.95th=[ 1074], 00:36:03.266 | 99.99th=[ 1074] 00:36:03.266 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:03.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:03.266 lat (usec) : 250=0.57%, 500=15.50%, 750=65.03%, 1000=15.31% 00:36:03.266 lat (msec) : 2=0.57%, 50=3.02% 00:36:03.266 cpu : usr=0.69%, sys=1.38%, ctx=532, majf=0, minf=1 00:36:03.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.266 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:03.266 job1: (groupid=0, jobs=1): err= 0: pid=926749: Mon Nov 25 13:10:43 2024 00:36:03.266 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:36:03.266 slat (nsec): min=10569, max=31244, avg=26067.22, stdev=4013.93 00:36:03.266 clat (usec): min=1288, max=42053, avg=39561.50, stdev=9557.29 00:36:03.266 lat (usec): min=1299, max=42079, avg=39587.57, stdev=9561.14 00:36:03.266 clat percentiles (usec): 00:36:03.266 | 1.00th=[ 1287], 5.00th=[ 1287], 10.00th=[41157], 20.00th=[41157], 00:36:03.266 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:36:03.266 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:03.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:03.266 | 99.99th=[42206] 00:36:03.266 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:36:03.266 slat (nsec): min=8911, max=59399, avg=26923.54, stdev=10826.33 00:36:03.266 clat (usec): min=239, max=945, avg=597.90, stdev=137.39 00:36:03.266 lat (usec): min=264, max=999, avg=624.82, stdev=142.42 00:36:03.266 clat percentiles (usec): 00:36:03.266 | 1.00th=[ 293], 5.00th=[ 359], 10.00th=[ 400], 20.00th=[ 478], 00:36:03.266 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:36:03.266 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 816], 00:36:03.266 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 947], 00:36:03.266 | 99.99th=[ 947] 00:36:03.266 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:03.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:03.266 lat (usec) : 250=0.38%, 500=23.21%, 750=60.57%, 1000=12.45% 00:36:03.266 lat (msec) : 2=0.19%, 50=3.21% 00:36:03.266 cpu : usr=1.26%, sys=1.45%, ctx=530, majf=0, minf=2 00:36:03.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.266 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:36:03.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:03.266 job2: (groupid=0, jobs=1): err= 0: pid=926759: Mon Nov 25 13:10:43 2024 00:36:03.266 read: IOPS=18, BW=73.4KiB/s (75.1kB/s)(76.0KiB/1036msec) 00:36:03.266 slat (nsec): min=11045, max=28323, avg=26897.47, stdev=3844.22 00:36:03.266 clat (usec): min=40853, max=42069, avg=41153.86, stdev=370.75 00:36:03.266 lat (usec): min=40881, max=42097, avg=41180.75, stdev=369.76 00:36:03.266 clat percentiles (usec): 00:36:03.266 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:36:03.266 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:03.266 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:36:03.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:03.266 | 99.99th=[42206] 00:36:03.266 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:36:03.266 slat (nsec): min=10078, max=79656, avg=30223.63, stdev=10203.93 00:36:03.266 clat (usec): min=210, max=856, avg=456.93, stdev=97.34 00:36:03.266 lat (usec): min=244, max=890, avg=487.15, stdev=101.52 00:36:03.266 clat percentiles (usec): 00:36:03.266 | 1.00th=[ 253], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 367], 00:36:03.266 | 30.00th=[ 408], 40.00th=[ 441], 50.00th=[ 461], 60.00th=[ 486], 00:36:03.266 | 70.00th=[ 506], 80.00th=[ 537], 90.00th=[ 578], 95.00th=[ 611], 00:36:03.266 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 857], 99.95th=[ 857], 00:36:03.266 | 99.99th=[ 857] 00:36:03.266 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:03.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:03.266 lat (usec) : 250=0.75%, 500=64.22%, 750=31.26%, 1000=0.19% 00:36:03.266 lat (msec) : 50=3.58% 00:36:03.266 cpu : usr=0.77%, sys=1.45%, ctx=532, majf=0, minf=1 00:36:03.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.266 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:03.266 job3: (groupid=0, jobs=1): err= 0: pid=926765: Mon Nov 25 13:10:43 2024 00:36:03.266 read: IOPS=15, BW=62.9KiB/s (64.4kB/s)(64.0KiB/1018msec) 00:36:03.266 slat (nsec): min=26627, max=27563, avg=26886.94, stdev=279.59 00:36:03.266 clat (usec): min=40989, max=42054, avg=41843.38, stdev=326.90 00:36:03.266 lat (usec): min=41016, max=42081, avg=41870.27, stdev=326.64 00:36:03.266 clat percentiles (usec): 00:36:03.266 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:36:03.266 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:36:03.266 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:03.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:03.266 | 99.99th=[42206] 00:36:03.266 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:36:03.266 slat (nsec): min=9550, max=59457, avg=29572.34, stdev=10079.62 00:36:03.266 clat (usec): min=259, max=1032, avg=642.86, stdev=131.16 00:36:03.266 lat (usec): min=269, max=1068, avg=672.43, stdev=134.75 00:36:03.266 clat percentiles (usec): 00:36:03.266 | 1.00th=[ 355], 5.00th=[ 424], 10.00th=[ 469], 20.00th=[ 523], 00:36:03.266 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 
00:36:03.266 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 799], 95.00th=[ 848], 00:36:03.266 | 99.00th=[ 979], 99.50th=[ 1012], 99.90th=[ 1037], 99.95th=[ 1037], 00:36:03.266 | 99.99th=[ 1037] 00:36:03.266 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:36:03.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:03.266 lat (usec) : 500=16.10%, 750=62.69%, 1000=17.61% 00:36:03.266 lat (msec) : 2=0.57%, 50=3.03% 00:36:03.266 cpu : usr=1.18%, sys=1.87%, ctx=528, majf=0, minf=1 00:36:03.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.266 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:03.266 00:36:03.266 Run status group 0 (all jobs): 00:36:03.266 READ: bw=270KiB/s (277kB/s), 62.9KiB/s-73.4KiB/s (64.4kB/s-75.1kB/s), io=280KiB (287kB), run=1015-1036msec 00:36:03.266 WRITE: bw=7907KiB/s (8097kB/s), 1977KiB/s-2018KiB/s (2024kB/s-2066kB/s), io=8192KiB (8389kB), run=1015-1036msec 00:36:03.266 00:36:03.266 Disk stats (read/write): 00:36:03.266 nvme0n1: ios=64/512, merge=0/0, ticks=963/311, in_queue=1274, util=96.59% 00:36:03.266 nvme0n2: ios=46/512, merge=0/0, ticks=539/259, in_queue=798, util=86.75% 00:36:03.266 nvme0n3: ios=64/512, merge=0/0, ticks=759/218, in_queue=977, util=97.15% 00:36:03.266 nvme0n4: ios=11/512, merge=0/0, ticks=460/271, in_queue=731, util=89.51% 00:36:03.266 13:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:03.266 [global] 00:36:03.266 thread=1 00:36:03.266 invalidate=1 00:36:03.266 rw=write 00:36:03.266 time_based=1 00:36:03.266 runtime=1 00:36:03.266 ioengine=libaio 00:36:03.266 direct=1 00:36:03.266 bs=4096 00:36:03.266 iodepth=128 00:36:03.267 norandommap=0 00:36:03.267 numjobs=1 00:36:03.267 00:36:03.267 verify_dump=1 00:36:03.267 verify_backlog=512 00:36:03.267 verify_state_save=0 00:36:03.267 do_verify=1 00:36:03.267 verify=crc32c-intel 00:36:03.267 [job0] 00:36:03.267 filename=/dev/nvme0n1 00:36:03.267 [job1] 00:36:03.267 filename=/dev/nvme0n2 00:36:03.267 [job2] 00:36:03.267 filename=/dev/nvme0n3 00:36:03.267 [job3] 00:36:03.267 filename=/dev/nvme0n4 00:36:03.267 Could not set queue depth (nvme0n1) 00:36:03.267 Could not set queue depth (nvme0n2) 00:36:03.267 Could not set queue depth (nvme0n3) 00:36:03.267 Could not set queue depth (nvme0n4) 00:36:03.836 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:03.836 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:03.836 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:03.836 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:03.836 fio-3.35 00:36:03.836 Starting 4 threads 00:36:04.779 00:36:04.779 job0: (groupid=0, jobs=1): err= 0: pid=927248: Mon Nov 25 13:10:44 2024 00:36:04.779 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:36:04.779 slat (nsec): min=896, max=17002k, avg=76944.18, stdev=658731.48 00:36:04.779 clat (usec): min=1308, max=47174, 
avg=10772.13, stdev=6953.21 00:36:04.779 lat (usec): min=1319, max=53272, avg=10849.07, stdev=7002.82 00:36:04.779 clat percentiles (usec): 00:36:04.779 | 1.00th=[ 2737], 5.00th=[ 4817], 10.00th=[ 6718], 20.00th=[ 7570], 00:36:04.779 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9110], 00:36:04.779 | 70.00th=[10552], 80.00th=[12387], 90.00th=[15795], 95.00th=[23725], 00:36:04.779 | 99.00th=[44303], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:36:04.779 | 99.99th=[46924] 00:36:04.779 write: IOPS=6348, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1004msec); 0 zone resets 00:36:04.779 slat (nsec): min=1541, max=14151k, avg=71313.46, stdev=566379.62 00:36:04.779 clat (usec): min=664, max=36101, avg=9498.93, stdev=5376.19 00:36:04.779 lat (usec): min=820, max=36468, avg=9570.25, stdev=5410.53 00:36:04.779 clat percentiles (usec): 00:36:04.779 | 1.00th=[ 1319], 5.00th=[ 2966], 10.00th=[ 4621], 20.00th=[ 6128], 00:36:04.779 | 30.00th=[ 6718], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 9241], 00:36:04.779 | 70.00th=[ 9896], 80.00th=[11863], 90.00th=[15795], 95.00th=[21627], 00:36:04.779 | 99.00th=[31589], 99.50th=[33817], 99.90th=[34341], 99.95th=[35914], 00:36:04.779 | 99.99th=[35914] 00:36:04.779 bw ( KiB/s): min=22248, max=27720, per=31.80%, avg=24984.00, stdev=3869.29, samples=2 00:36:04.779 iops : min= 5562, max= 6930, avg=6246.00, stdev=967.32, samples=2 00:36:04.779 lat (usec) : 750=0.02%, 1000=0.16% 00:36:04.779 lat (msec) : 2=1.32%, 4=3.53%, 10=64.13%, 20=24.09%, 50=6.76% 00:36:04.779 cpu : usr=4.59%, sys=5.38%, ctx=398, majf=0, minf=1 00:36:04.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:36:04.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:04.779 issued rwts: total=6144,6374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:04.779 job1: (groupid=0, jobs=1): err= 0: pid=927249: Mon Nov 25 13:10:44 2024 00:36:04.779 read: IOPS=4522, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1005msec) 00:36:04.779 slat (nsec): min=887, max=16309k, avg=141705.18, stdev=995214.10 00:36:04.779 clat (usec): min=1466, max=47451, avg=18183.65, stdev=13203.88 00:36:04.779 lat (usec): min=4332, max=47460, avg=18325.35, stdev=13271.40 00:36:04.779 clat percentiles (usec): 00:36:04.780 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7111], 00:36:04.780 | 30.00th=[ 7570], 40.00th=[ 8455], 50.00th=[ 9765], 60.00th=[17433], 00:36:04.780 | 70.00th=[26346], 80.00th=[32900], 90.00th=[39060], 95.00th=[43779], 00:36:04.780 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:36:04.780 | 99.99th=[47449] 00:36:04.780 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:36:04.780 slat (nsec): min=1526, max=7312.1k, avg=72972.65, stdev=416480.30 00:36:04.780 clat (usec): min=2822, max=26518, avg=9665.10, stdev=4149.23 00:36:04.780 lat (usec): min=2825, max=26527, avg=9738.07, stdev=4162.81 00:36:04.780 clat percentiles (usec): 00:36:04.780 | 1.00th=[ 4015], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6456], 00:36:04.780 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 8225], 60.00th=[ 9241], 00:36:04.780 | 70.00th=[11731], 80.00th=[13435], 90.00th=[15795], 95.00th=[17695], 00:36:04.780 | 99.00th=[22414], 99.50th=[22676], 99.90th=[26608], 99.95th=[26608], 00:36:04.780 | 99.99th=[26608] 00:36:04.780 bw ( KiB/s): min=12032, max=24832, per=23.46%, 
avg=18432.00, stdev=9050.97, samples=2 00:36:04.780 iops : min= 3008, max= 6208, avg=4608.00, stdev=2262.74, samples=2 00:36:04.780 lat (msec) : 2=0.01%, 4=0.49%, 10=56.50%, 20=24.18%, 50=18.82% 00:36:04.780 cpu : usr=3.09%, sys=3.78%, ctx=462, majf=0, minf=1 00:36:04.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:36:04.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:04.780 issued rwts: total=4545,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:04.780 job2: (groupid=0, jobs=1): err= 0: pid=927254: Mon Nov 25 13:10:44 2024 00:36:04.780 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:36:04.780 slat (nsec): min=917, max=17278k, avg=130534.68, stdev=972066.12 00:36:04.780 clat (usec): min=2929, max=54786, avg=15878.24, stdev=8930.03 00:36:04.780 lat (usec): min=2939, max=54814, avg=16008.77, stdev=9020.98 00:36:04.780 clat percentiles (usec): 00:36:04.780 | 1.00th=[ 5080], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[ 9896], 00:36:04.780 | 30.00th=[11076], 40.00th=[11600], 50.00th=[13304], 60.00th=[15139], 00:36:04.780 | 70.00th=[17695], 80.00th=[19530], 90.00th=[23987], 95.00th=[34866], 00:36:04.780 | 99.00th=[51119], 99.50th=[51119], 99.90th=[54264], 99.95th=[54264], 00:36:04.780 | 99.99th=[54789] 00:36:04.780 write: IOPS=3621, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1004msec); 0 zone resets 00:36:04.780 slat (nsec): min=1590, max=16070k, avg=132147.87, stdev=737517.28 00:36:04.780 clat (usec): min=1219, max=69071, avg=19364.79, stdev=13069.96 00:36:04.780 lat (usec): min=1254, max=69080, avg=19496.94, stdev=13151.55 00:36:04.780 clat percentiles (usec): 00:36:04.780 | 1.00th=[ 5014], 5.00th=[ 5866], 10.00th=[ 7635], 20.00th=[ 9110], 00:36:04.780 | 30.00th=[10028], 40.00th=[14615], 50.00th=[16712], 60.00th=[19530], 00:36:04.780 | 70.00th=[21627], 80.00th=[24249], 90.00th=[34866], 95.00th=[50594], 00:36:04.780 | 99.00th=[67634], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:36:04.780 | 99.99th=[68682] 00:36:04.780 bw ( KiB/s): min=12288, max=16384, per=18.25%, avg=14336.00, stdev=2896.31, samples=2 00:36:04.780 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:36:04.780 lat (msec) : 2=0.03%, 4=0.44%, 10=24.65%, 20=48.41%, 50=22.63% 00:36:04.780 lat (msec) : 100=3.84% 00:36:04.780 cpu : usr=2.99%, sys=3.49%, ctx=352, majf=0, minf=2 00:36:04.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:36:04.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:04.780 issued rwts: total=3584,3636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:04.780 job3: (groupid=0, jobs=1): err= 0: pid=927258: Mon Nov 25 13:10:44 2024 00:36:04.780 read: IOPS=5048, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1004msec) 00:36:04.780 slat (nsec): min=939, max=12793k, avg=92977.63, stdev=629982.30 00:36:04.780 clat (usec): min=3504, max=39785, avg=11699.65, stdev=5461.00 00:36:04.780 lat (usec): min=3954, max=39792, avg=11792.62, stdev=5495.17 00:36:04.780 clat percentiles (usec): 00:36:04.780 | 1.00th=[ 5800], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 7767], 00:36:04.780 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11076], 00:36:04.780 | 70.00th=[12256], 
80.00th=[13960], 90.00th=[17695], 95.00th=[21103], 00:36:04.780 | 99.00th=[35914], 99.50th=[36439], 99.90th=[39060], 99.95th=[39584], 00:36:04.780 | 99.99th=[39584] 00:36:04.780 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:36:04.780 slat (nsec): min=1617, max=12914k, avg=98269.38, stdev=520502.56 00:36:04.780 clat (usec): min=1546, max=39790, avg=13160.53, stdev=7691.11 00:36:04.780 lat (usec): min=1557, max=39799, avg=13258.80, stdev=7737.25 00:36:04.780 clat percentiles (usec): 00:36:04.780 | 1.00th=[ 3982], 5.00th=[ 5407], 10.00th=[ 6980], 20.00th=[ 7635], 00:36:04.780 | 30.00th=[ 8291], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[11600], 00:36:04.780 | 70.00th=[13435], 80.00th=[19530], 90.00th=[24511], 95.00th=[31065], 00:36:04.780 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:36:04.780 | 99.99th=[39584] 00:36:04.780 bw ( KiB/s): min=18064, max=22896, per=26.07%, avg=20480.00, stdev=3416.74, samples=2 00:36:04.780 iops : min= 4516, max= 5724, avg=5120.00, stdev=854.18, samples=2 00:36:04.780 lat (msec) : 2=0.06%, 4=0.63%, 10=48.53%, 20=38.34%, 50=12.44% 00:36:04.780 cpu : usr=3.39%, sys=4.79%, ctx=509, majf=0, minf=1 00:36:04.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:04.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:04.780 issued rwts: total=5069,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:04.780 00:36:04.780 Run status group 0 (all jobs): 00:36:04.780 READ: bw=75.2MiB/s (78.8MB/s), 13.9MiB/s-23.9MiB/s (14.6MB/s-25.1MB/s), io=75.6MiB (79.2MB), run=1004-1005msec 00:36:04.780 WRITE: bw=76.7MiB/s (80.4MB/s), 14.1MiB/s-24.8MiB/s (14.8MB/s-26.0MB/s), io=77.1MiB (80.8MB), run=1004-1005msec 00:36:04.780 00:36:04.780 Disk stats (read/write): 00:36:04.780 nvme0n1: ios=5687/6144, merge=0/0, ticks=46121/42901, in_queue=89022, util=95.79% 00:36:04.780 nvme0n2: ios=4132/4160, merge=0/0, ticks=20312/12050, in_queue=32362, util=91.84% 00:36:04.780 nvme0n3: ios=3072/3191, merge=0/0, ticks=27610/33113, in_queue=60723, util=88.36% 00:36:04.780 nvme0n4: ios=4153/4255, merge=0/0, ticks=31089/36186, in_queue=67275, util=96.47% 00:36:04.780 13:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:36:05.040 [global] 00:36:05.040 thread=1 00:36:05.040 invalidate=1 00:36:05.040 rw=randwrite 00:36:05.040 time_based=1 00:36:05.040 runtime=1 00:36:05.040 ioengine=libaio 00:36:05.040 direct=1 00:36:05.040 bs=4096 00:36:05.040 iodepth=128 00:36:05.040 norandommap=0 00:36:05.040 numjobs=1 00:36:05.040 00:36:05.040 verify_dump=1 00:36:05.040 verify_backlog=512 00:36:05.040 verify_state_save=0 00:36:05.040 do_verify=1 00:36:05.040 verify=crc32c-intel 00:36:05.040 [job0] 00:36:05.040 filename=/dev/nvme0n1 00:36:05.040 [job1] 00:36:05.040 filename=/dev/nvme0n2 00:36:05.040 [job2] 00:36:05.040 filename=/dev/nvme0n3 00:36:05.040 [job3] 00:36:05.040 filename=/dev/nvme0n4 00:36:05.040 Could not set queue depth (nvme0n1) 00:36:05.040 Could not set queue depth (nvme0n2) 00:36:05.040 Could not set queue depth (nvme0n3) 00:36:05.040 Could not set queue depth (nvme0n4) 00:36:05.301 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:36:05.301 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:05.301 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:05.301 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:05.301 fio-3.35 00:36:05.301 Starting 4 threads 00:36:06.685 00:36:06.685 job0: (groupid=0, jobs=1): err= 0: pid=927771: Mon Nov 25 13:10:46 2024 00:36:06.685 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:36:06.685 slat (nsec): min=895, max=19396k, avg=73675.86, stdev=583344.75 00:36:06.685 clat (usec): min=1917, max=49441, avg=9462.04, stdev=5242.47 00:36:06.685 lat (usec): min=1919, max=56021, avg=9535.72, stdev=5299.08 00:36:06.685 clat percentiles (usec): 00:36:06.685 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 6915], 00:36:06.685 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7898], 60.00th=[ 8225], 00:36:06.685 | 70.00th=[ 8848], 80.00th=[10028], 90.00th=[16188], 95.00th=[19530], 00:36:06.685 | 99.00th=[31327], 99.50th=[35390], 99.90th=[46400], 99.95th=[46400], 00:36:06.685 | 99.99th=[49546] 00:36:06.685 write: IOPS=7010, BW=27.4MiB/s (28.7MB/s)(27.5MiB/1004msec); 0 zone resets 00:36:06.685 slat (nsec): min=1536, max=7312.7k, avg=64696.40, stdev=402482.04 00:36:06.685 clat (usec): min=703, max=78290, avg=9143.38, stdev=8642.93 00:36:06.685 lat (usec): min=735, max=78319, avg=9208.07, stdev=8701.92 00:36:06.685 clat percentiles (usec): 00:36:06.685 | 1.00th=[ 2704], 5.00th=[ 4555], 10.00th=[ 5800], 20.00th=[ 6521], 00:36:06.685 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7635], 00:36:06.685 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[11076], 95.00th=[16909], 00:36:06.685 | 99.00th=[62653], 99.50th=[71828], 99.90th=[77071], 99.95th=[77071], 00:36:06.685 | 99.99th=[78119] 00:36:06.685 bw ( KiB/s): min=22672, max=32624, per=29.76%, avg=27648.00, stdev=7037.13, samples=2 00:36:06.685 iops : min= 5668, max= 8156, avg=6912.00, stdev=1759.28, samples=2 00:36:06.685 lat (usec) : 750=0.01% 00:36:06.685 lat (msec) : 2=0.28%, 4=1.44%, 10=83.05%, 20=10.87%, 50=3.43% 00:36:06.685 lat (msec) : 100=0.93% 00:36:06.685 cpu : usr=3.69%, sys=7.08%, ctx=596, majf=0, minf=1 00:36:06.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:36:06.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:06.685 issued rwts: total=6656,7039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.685 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:06.685 job1: (groupid=0, jobs=1): err= 0: pid=927772: Mon Nov 25 13:10:46 2024 00:36:06.685 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:36:06.685 slat (nsec): min=893, max=21066k, avg=102777.23, stdev=858875.34 00:36:06.685 clat (usec): min=5340, max=56823, avg=13346.92, stdev=6797.00 00:36:06.685 lat (usec): min=5346, max=56847, avg=13449.70, stdev=6867.93 00:36:06.685 clat percentiles (usec): 00:36:06.685 | 1.00th=[ 6521], 5.00th=[ 7635], 10.00th=[ 8291], 20.00th=[ 8586], 00:36:06.685 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[12256], 60.00th=[13173], 00:36:06.685 | 70.00th=[14353], 80.00th=[15795], 90.00th=[18220], 95.00th=[27132], 00:36:06.685 | 99.00th=[41681], 99.50th=[46400], 99.90th=[46400], 99.95th=[48497], 00:36:06.685 | 99.99th=[56886] 00:36:06.685 write: IOPS=3795, BW=14.8MiB/s 
(15.5MB/s)(15.0MiB/1011msec); 0 zone resets 00:36:06.685 slat (nsec): min=1524, max=15741k, avg=157805.76, stdev=898909.20 00:36:06.685 clat (msec): min=2, max=112, avg=20.66, stdev=23.06 00:36:06.685 lat (msec): min=2, max=112, avg=20.82, stdev=23.22 00:36:06.685 clat percentiles (msec): 00:36:06.685 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:36:06.685 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 15], 00:36:06.685 | 70.00th=[ 20], 80.00th=[ 27], 90.00th=[ 39], 95.00th=[ 91], 00:36:06.685 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 113], 99.95th=[ 113], 00:36:06.685 | 99.99th=[ 113] 00:36:06.685 bw ( KiB/s): min=13048, max=16657, per=15.98%, avg=14852.50, stdev=2551.95, samples=2 00:36:06.685 iops : min= 3262, max= 4164, avg=3713.00, stdev=637.81, samples=2 00:36:06.685 lat (msec) : 4=0.15%, 10=34.60%, 20=46.05%, 50=15.01%, 100=2.49% 00:36:06.685 lat (msec) : 250=1.70% 00:36:06.685 cpu : usr=2.77%, sys=3.76%, ctx=360, majf=0, minf=1 00:36:06.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:36:06.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:06.686 issued rwts: total=3584,3837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:06.686 job2: (groupid=0, jobs=1): err= 0: pid=927773: Mon Nov 25 13:10:46 2024 00:36:06.686 read: IOPS=4735, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1007msec) 00:36:06.686 slat (nsec): min=979, max=11565k, avg=90476.86, stdev=665855.74 00:36:06.686 clat (usec): min=1732, max=32254, avg=11338.09, stdev=4218.51 00:36:06.686 lat (usec): min=4286, max=35184, avg=11428.56, stdev=4265.49 00:36:06.686 clat percentiles (usec): 00:36:06.686 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 7701], 20.00th=[ 8586], 00:36:06.686 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10814], 00:36:06.686 | 70.00th=[11731], 80.00th=[13435], 90.00th=[18220], 95.00th=[20317], 00:36:06.686 | 99.00th=[24773], 99.50th=[25560], 99.90th=[27395], 99.95th=[32375], 00:36:06.686 | 99.99th=[32375] 00:36:06.686 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:36:06.686 slat (nsec): min=1628, max=15596k, avg=104596.46, stdev=763776.01 00:36:06.686 clat (usec): min=669, max=77946, avg=14341.32, stdev=12317.81 00:36:06.686 lat (usec): min=809, max=77955, avg=14445.91, stdev=12399.32 00:36:06.686 clat percentiles (usec): 00:36:06.686 | 1.00th=[ 3752], 5.00th=[ 5800], 10.00th=[ 6980], 20.00th=[ 7898], 00:36:06.686 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[11207], 00:36:06.686 | 70.00th=[12780], 80.00th=[17171], 90.00th=[26870], 95.00th=[40109], 00:36:06.686 | 99.00th=[74974], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:36:06.686 | 99.99th=[78119] 00:36:06.686 bw ( KiB/s): min=16384, max=24576, per=22.04%, avg=20480.00, stdev=5792.62, samples=2 00:36:06.686 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:36:06.686 lat (usec) : 750=0.01%, 1000=0.03% 00:36:06.686 lat (msec) : 2=0.11%, 4=0.52%, 10=49.95%, 20=38.59%, 50=9.27% 00:36:06.686 lat (msec) : 100=1.52% 00:36:06.686 cpu : usr=3.58%, sys=5.27%, ctx=359, majf=0, minf=1 00:36:06.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:36:06.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:06.686 
issued rwts: total=4769,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:06.686 job3: (groupid=0, jobs=1): err= 0: pid=927775: Mon Nov 25 13:10:46 2024 00:36:06.686 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:36:06.686 slat (nsec): min=926, max=10857k, avg=67483.26, stdev=481604.53 00:36:06.686 clat (usec): min=2735, max=21123, avg=9127.56, stdev=2165.56 00:36:06.686 lat (usec): min=2776, max=21195, avg=9195.04, stdev=2191.01 00:36:06.686 clat percentiles (usec): 00:36:06.686 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 7898], 00:36:06.686 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8848], 00:36:06.686 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11863], 95.00th=[12649], 00:36:06.686 | 99.00th=[17171], 99.50th=[17695], 99.90th=[20317], 99.95th=[20317], 00:36:06.686 | 99.99th=[21103] 00:36:06.686 write: IOPS=7458, BW=29.1MiB/s (30.5MB/s)(29.2MiB/1004msec); 0 zone resets 00:36:06.686 slat (nsec): min=1584, max=9385.5k, avg=62547.25, stdev=389786.62 00:36:06.686 clat (usec): min=627, max=20961, avg=8282.64, stdev=2482.63 00:36:06.686 lat (usec): min=1103, max=20977, avg=8345.18, stdev=2505.10 00:36:06.686 clat percentiles (usec): 00:36:06.686 | 1.00th=[ 3228], 5.00th=[ 4948], 10.00th=[ 6325], 20.00th=[ 6980], 00:36:06.686 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8094], 00:36:06.686 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10945], 95.00th=[12780], 00:36:06.686 | 99.00th=[18482], 99.50th=[19792], 99.90th=[20841], 99.95th=[20841], 00:36:06.686 | 99.99th=[20841] 00:36:06.686 bw ( KiB/s): min=28729, max=30208, per=31.72%, avg=29468.50, stdev=1045.81, samples=2 00:36:06.686 iops : min= 7182, max= 7552, avg=7367.00, stdev=261.63, samples=2 00:36:06.686 lat (usec) : 750=0.01% 00:36:06.686 lat (msec) : 2=0.26%, 4=1.00%, 10=81.12%, 20=17.32%, 50=0.29% 00:36:06.686 cpu : usr=3.89%, sys=6.58%, ctx=740, majf=0, minf=2 00:36:06.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:36:06.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:06.686 issued rwts: total=7168,7488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:06.686 00:36:06.686 Run status group 0 (all jobs): 00:36:06.686 READ: bw=85.7MiB/s (89.8MB/s), 13.8MiB/s-27.9MiB/s (14.5MB/s-29.2MB/s), io=86.6MiB (90.8MB), run=1004-1011msec 00:36:06.686 WRITE: bw=90.7MiB/s (95.1MB/s), 14.8MiB/s-29.1MiB/s (15.5MB/s-30.5MB/s), io=91.7MiB (96.2MB), run=1004-1011msec 00:36:06.686 00:36:06.686 Disk stats (read/write): 00:36:06.686 nvme0n1: ios=6194/6159, merge=0/0, ticks=32236/24825, in_queue=57061, util=87.68% 00:36:06.686 nvme0n2: ios=2917/3072, merge=0/0, ticks=31678/57517, in_queue=89195, util=87.67% 00:36:06.686 nvme0n3: ios=4118/4415, merge=0/0, ticks=30492/35921, in_queue=66413, util=96.20% 00:36:06.686 nvme0n4: ios=6001/6144, merge=0/0, ticks=35610/29732, in_queue=65342, util=90.48% 00:36:06.686 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:36:06.686 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=928108 00:36:06.686 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:36:06.686 13:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:36:06.686 [global] 00:36:06.686 thread=1 00:36:06.686 invalidate=1 00:36:06.686 rw=read 00:36:06.686 time_based=1 00:36:06.686 runtime=10 00:36:06.686 ioengine=libaio 00:36:06.686 direct=1 00:36:06.686 bs=4096 00:36:06.686 iodepth=1 00:36:06.686 norandommap=1 00:36:06.686 numjobs=1 00:36:06.686 00:36:06.686 [job0] 00:36:06.686 filename=/dev/nvme0n1 00:36:06.686 [job1] 00:36:06.686 filename=/dev/nvme0n2 00:36:06.686 [job2] 00:36:06.686 filename=/dev/nvme0n3 00:36:06.686 [job3] 00:36:06.686 filename=/dev/nvme0n4 00:36:06.686 Could not set queue depth (nvme0n1) 00:36:06.686 Could not set queue depth (nvme0n2) 00:36:06.686 Could not set queue depth (nvme0n3) 00:36:06.686 Could not set queue depth (nvme0n4) 00:36:06.947 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:06.947 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:06.947 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:06.947 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:06.947 fio-3.35 00:36:06.947 Starting 4 threads 00:36:09.489 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:36:09.750 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:36:09.750 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2023424, buflen=4096 00:36:09.750 fio: pid=928295, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:10.012 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=286720, buflen=4096 00:36:10.012 fio: pid=928294, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:10.012 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:10.012 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:36:10.012 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:10.012 13:10:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:36:10.273 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=528384, buflen=4096 00:36:10.273 fio: pid=928292, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:10.273 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:10.273 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:36:10.273 fio: io_u error on file /dev/nvme0n2: 
Operation not supported: read offset=1114112, buflen=4096 00:36:10.273 fio: pid=928293, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:36:10.273 00:36:10.273 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=928292: Mon Nov 25 13:10:50 2024 00:36:10.273 read: IOPS=43, BW=172KiB/s (177kB/s)(516KiB/2992msec) 00:36:10.273 slat (usec): min=6, max=126, avg=26.86, stdev=14.21 00:36:10.273 clat (usec): min=709, max=41159, avg=22990.17, stdev=19967.73 00:36:10.273 lat (usec): min=758, max=41185, avg=23017.03, stdev=19970.27 00:36:10.273 clat percentiles (usec): 00:36:10.273 | 1.00th=[ 734], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 979], 00:36:10.273 | 30.00th=[ 1037], 40.00th=[ 1090], 50.00th=[40633], 60.00th=[41157], 00:36:10.273 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:10.273 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:36:10.273 | 99.99th=[41157] 00:36:10.273 bw ( KiB/s): min= 96, max= 536, per=15.37%, avg=187.20, stdev=195.03, samples=5 00:36:10.273 iops : min= 24, max= 134, avg=46.80, stdev=48.76, samples=5 00:36:10.273 lat (usec) : 750=1.54%, 1000=21.54% 00:36:10.273 lat (msec) : 2=21.54%, 50=54.62% 00:36:10.273 cpu : usr=0.00%, sys=0.20%, ctx=133, majf=0, minf=1 00:36:10.273 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.273 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.273 issued rwts: total=130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.273 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=928293: Mon Nov 25 13:10:50 2024 00:36:10.273 read: IOPS=86, BW=343KiB/s (351kB/s)(1088KiB/3173msec) 00:36:10.273 slat (usec): min=6, max=7539, avg=55.15, stdev=454.76 00:36:10.274 clat (usec): min=699, max=43151, avg=11521.16, stdev=17621.73 00:36:10.274 lat (usec): min=724, max=43178, avg=11548.79, stdev=17622.64 00:36:10.274 clat percentiles (usec): 00:36:10.274 | 1.00th=[ 766], 5.00th=[ 906], 10.00th=[ 938], 20.00th=[ 979], 00:36:10.274 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1123], 00:36:10.274 | 70.00th=[ 1188], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:10.274 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:36:10.274 | 99.99th=[43254] 00:36:10.274 bw ( KiB/s): min= 96, max= 1032, per=29.35%, avg=357.33, stdev=411.35, samples=6 00:36:10.274 iops : min= 24, max= 258, avg=89.33, stdev=102.84, samples=6 00:36:10.274 lat (usec) : 750=0.73%, 1000=24.91% 00:36:10.274 lat (msec) : 2=47.62%, 10=0.37%, 50=26.01% 00:36:10.274 cpu : usr=0.16%, sys=0.28%, ctx=276, majf=0, minf=2 00:36:10.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.274 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.274 issued rwts: total=273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.274 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=928294: Mon Nov 25 13:10:50 2024 00:36:10.274 read: IOPS=25, BW=100KiB/s (103kB/s)(280KiB/2789msec) 00:36:10.274 slat (usec): min=11, 
max=13694, avg=219.05, stdev=1622.13 00:36:10.274 clat (usec): min=766, max=41915, avg=39300.59, stdev=8185.05 00:36:10.274 lat (usec): min=793, max=55017, avg=39522.38, stdev=8395.13 00:36:10.274 clat percentiles (usec): 00:36:10.274 | 1.00th=[ 766], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:10.274 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:10.274 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:10.274 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:10.274 | 99.99th=[41681] 00:36:10.274 bw ( KiB/s): min= 96, max= 112, per=8.22%, avg=100.80, stdev= 7.16, samples=5 00:36:10.274 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:36:10.274 lat (usec) : 1000=4.23% 00:36:10.274 lat (msec) : 50=94.37% 00:36:10.274 cpu : usr=0.11%, sys=0.00%, ctx=72, majf=0, minf=2 00:36:10.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.274 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.274 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.274 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=928295: Mon Nov 25 13:10:50 2024 00:36:10.274 read: IOPS=189, BW=758KiB/s (776kB/s)(1976KiB/2607msec) 00:36:10.274 slat (nsec): min=6694, max=68490, avg=25464.33, stdev=5741.07 00:36:10.274 clat (usec): min=324, max=42662, avg=5188.71, stdev=12803.54 00:36:10.274 lat (usec): min=351, max=42688, avg=5214.17, stdev=12803.91 00:36:10.274 clat percentiles (usec): 00:36:10.274 | 1.00th=[ 453], 5.00th=[ 553], 10.00th=[ 578], 20.00th=[ 635], 00:36:10.274 | 30.00th=[ 685], 40.00th=[ 758], 50.00th=[ 799], 60.00th=[ 840], 00:36:10.274 | 70.00th=[ 873], 80.00th=[ 906], 90.00th=[41157], 95.00th=[42206], 00:36:10.274 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:10.274 | 99.99th=[42730] 00:36:10.274 bw ( KiB/s): min= 224, max= 2136, per=64.45%, avg=784.00, stdev=782.96, samples=5 00:36:10.274 iops : min= 56, max= 534, avg=196.00, stdev=195.74, samples=5 00:36:10.274 lat (usec) : 500=1.82%, 750=36.77%, 1000=49.29% 00:36:10.274 lat (msec) : 2=1.21%, 50=10.71% 00:36:10.274 cpu : usr=0.23%, sys=0.54%, ctx=496, majf=0, minf=2 00:36:10.274 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:10.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.274 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:10.274 issued rwts: total=495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:10.274 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:10.274 00:36:10.274 Run status group 0 (all jobs): 00:36:10.274 READ: bw=1217KiB/s (1246kB/s), 100KiB/s-758KiB/s (103kB/s-776kB/s), io=3860KiB (3953kB), run=2607-3173msec 00:36:10.274 00:36:10.274 Disk stats (read/write): 00:36:10.274 nvme0n1: ios=125/0, merge=0/0, ticks=2803/0, in_queue=2803, util=94.76% 00:36:10.274 nvme0n2: ios=270/0, merge=0/0, ticks=3041/0, in_queue=3041, util=95.66% 00:36:10.274 nvme0n3: ios=65/0, merge=0/0, ticks=2548/0, in_queue=2548, util=95.99% 00:36:10.274 nvme0n4: ios=494/0, merge=0/0, ticks=2557/0, in_queue=2557, util=96.42% 00:36:10.536 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:10.536 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:36:10.797 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:10.797 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:36:10.797 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:10.797 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:36:11.058 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:36:11.058 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:36:11.321 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:36:11.321 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 928108 00:36:11.321 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:36:11.321 13:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:11.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:36:11.321 nvmf hotplug test: fio failed as expected 00:36:11.321 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:11.582 rmmod nvme_tcp 00:36:11.582 rmmod nvme_fabrics 00:36:11.582 rmmod nvme_keyring 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 924941 ']' 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 924941 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 924941 ']' 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 924941 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 924941 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 924941' 00:36:11.582 killing process with pid 924941 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 924941 00:36:11.582 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 924941 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.843 13:10:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:14.393 00:36:14.393 real 0m28.839s 00:36:14.393 user 2m16.725s 00:36:14.393 sys 0m12.339s 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:14.393 ************************************ 00:36:14.393 END TEST nvmf_fio_target 00:36:14.393 ************************************ 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:14.393 ************************************ 00:36:14.393 START TEST nvmf_bdevio 00:36:14.393 ************************************ 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:14.393 * Looking for test storage... 
00:36:14.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.393 --rc genhtml_branch_coverage=1 00:36:14.393 --rc genhtml_function_coverage=1 00:36:14.393 --rc genhtml_legend=1 00:36:14.393 --rc geninfo_all_blocks=1 00:36:14.393 --rc geninfo_unexecuted_blocks=1 00:36:14.393 00:36:14.393 ' 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.393 --rc genhtml_branch_coverage=1 00:36:14.393 --rc genhtml_function_coverage=1 00:36:14.393 --rc genhtml_legend=1 00:36:14.393 --rc geninfo_all_blocks=1 00:36:14.393 --rc geninfo_unexecuted_blocks=1 00:36:14.393 00:36:14.393 ' 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.393 --rc genhtml_branch_coverage=1 00:36:14.393 --rc genhtml_function_coverage=1 00:36:14.393 --rc genhtml_legend=1 00:36:14.393 --rc geninfo_all_blocks=1 00:36:14.393 --rc geninfo_unexecuted_blocks=1 00:36:14.393 00:36:14.393 ' 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:14.393 --rc genhtml_branch_coverage=1 00:36:14.393 --rc genhtml_function_coverage=1 00:36:14.393 --rc genhtml_legend=1 00:36:14.393 --rc geninfo_all_blocks=1 00:36:14.393 --rc geninfo_unexecuted_blocks=1 00:36:14.393 00:36:14.393 ' 00:36:14.393 13:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.393 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:14.394 13:10:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:14.394 13:10:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:22.535 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:22.535 13:11:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:22.535 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:22.535 Found net devices under 0000:31:00.0: cvl_0_0 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:22.535 Found net devices under 0000:31:00.1: cvl_0_1 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:22.535 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.536 13:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:22.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:36:22.536 00:36:22.536 --- 10.0.0.2 ping statistics --- 00:36:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.536 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:22.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:36:22.536 00:36:22.536 --- 10.0.0.1 ping statistics --- 00:36:22.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.536 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:22.536 13:11:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=933884 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 933884 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 933884 ']' 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:22.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:22.536 13:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:22.536 [2024-11-25 13:11:02.336107] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:22.536 [2024-11-25 13:11:02.337235] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:36:22.536 [2024-11-25 13:11:02.337287] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:22.798 [2024-11-25 13:11:02.446247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:22.798 [2024-11-25 13:11:02.496753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:22.798 [2024-11-25 13:11:02.496799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:22.798 [2024-11-25 13:11:02.496808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:22.798 [2024-11-25 13:11:02.496815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:22.798 [2024-11-25 13:11:02.496822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:22.798 [2024-11-25 13:11:02.498915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:22.798 [2024-11-25 13:11:02.499081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:22.798 [2024-11-25 13:11:02.499241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:22.798 [2024-11-25 13:11:02.499242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:22.798 [2024-11-25 13:11:02.583199] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:36:22.798 [2024-11-25 13:11:02.584246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:22.798 [2024-11-25 13:11:02.584478] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:22.798 [2024-11-25 13:11:02.585142] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:22.798 [2024-11-25 13:11:02.585176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:23.370 [2024-11-25 13:11:03.192106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:23.370 Malloc0 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.370 13:11:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.370 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:23.631 [2024-11-25 13:11:03.276333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:23.631 { 00:36:23.631 "params": { 00:36:23.631 "name": "Nvme$subsystem", 00:36:23.631 "trtype": "$TEST_TRANSPORT", 00:36:23.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:23.631 "adrfam": "ipv4", 00:36:23.631 "trsvcid": "$NVMF_PORT", 00:36:23.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:23.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:23.631 "hdgst": ${hdgst:-false}, 00:36:23.631 "ddgst": ${ddgst:-false} 00:36:23.631 }, 00:36:23.631 "method": "bdev_nvme_attach_controller" 00:36:23.631 } 00:36:23.631 EOF 00:36:23.631 )") 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:23.631 13:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:23.631 "params": { 00:36:23.631 "name": "Nvme1", 00:36:23.631 "trtype": "tcp", 00:36:23.631 "traddr": "10.0.0.2", 00:36:23.631 "adrfam": "ipv4", 00:36:23.631 "trsvcid": "4420", 00:36:23.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:23.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:23.631 "hdgst": false, 00:36:23.631 "ddgst": false 00:36:23.631 }, 00:36:23.631 "method": "bdev_nvme_attach_controller" 00:36:23.631 }' 00:36:23.631 [2024-11-25 13:11:03.335479] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
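
The --json /dev/fd/62 argument traced above is bash process substitution: gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed in the log, and bdevio reads them through an open file descriptor. A minimal sketch; the inner params/method object is exactly what the log printed, while the surrounding "subsystems" envelope is an assumption about the wrapper the helper adds:

# Sketch of the traced bdevio invocation; bash exposes the substituted
# stream as /dev/fd/<n>, which is why the log shows "--json /dev/fd/62".
"$SPDK/test/bdev/bdevio/bdevio" --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
)
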
00:36:23.631 [2024-11-25 13:11:03.335549] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934022 ] 00:36:23.631 [2024-11-25 13:11:03.421856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:23.631 [2024-11-25 13:11:03.465715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.631 [2024-11-25 13:11:03.465841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:23.631 [2024-11-25 13:11:03.465844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.891 I/O targets: 00:36:23.892 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:23.892 00:36:23.892 00:36:23.892 CUnit - A unit testing framework for C - Version 2.1-3 00:36:23.892 http://cunit.sourceforge.net/ 00:36:23.892 00:36:23.892 00:36:23.892 Suite: bdevio tests on: Nvme1n1 00:36:23.892 Test: blockdev write read block ...passed 00:36:23.892 Test: blockdev write zeroes read block ...passed 00:36:23.892 Test: blockdev write zeroes read no split ...passed 00:36:23.892 Test: blockdev write zeroes read split ...passed 00:36:23.892 Test: blockdev write zeroes read split partial ...passed 00:36:23.892 Test: blockdev reset ...[2024-11-25 13:11:03.773692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:23.892 [2024-11-25 13:11:03.773764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb3530 (9): Bad file descriptor 00:36:24.151 [2024-11-25 13:11:03.826454] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:36:24.151 passed 00:36:24.151 Test: blockdev write read 8 blocks ...passed 00:36:24.151 Test: blockdev write read size > 128k ...passed 00:36:24.151 Test: blockdev write read invalid size ...passed 00:36:24.151 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:24.151 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:24.151 Test: blockdev write read max offset ...passed 00:36:24.151 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:24.151 Test: blockdev writev readv 8 blocks ...passed 00:36:24.151 Test: blockdev writev readv 30 x 1block ...passed 00:36:24.151 Test: blockdev writev readv block ...passed 00:36:24.151 Test: blockdev writev readv size > 128k ...passed 00:36:24.410 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:24.410 Test: blockdev comparev and writev ...[2024-11-25 13:11:04.095242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.410 [2024-11-25 13:11:04.095269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.095281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.410 [2024-11-25 13:11:04.095291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.095845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.410 [2024-11-25 13:11:04.095855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.095869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.410 [2024-11-25 13:11:04.095874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.096414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.410 [2024-11-25 13:11:04.096424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.096434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.410 [2024-11-25 13:11:04.096441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.096966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.410 [2024-11-25 13:11:04.096976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.096986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:24.410 [2024-11-25 13:11:04.096991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:24.410 passed 00:36:24.410 Test: blockdev nvme passthru rw ...passed 00:36:24.410 Test: blockdev nvme passthru vendor specific ...[2024-11-25 13:11:04.181768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:24.410 [2024-11-25 13:11:04.181780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.182132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:24.410 [2024-11-25 13:11:04.182142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.182484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:24.410 [2024-11-25 13:11:04.182492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:24.410 [2024-11-25 13:11:04.182828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:24.410 [2024-11-25 13:11:04.182836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:24.410 passed 00:36:24.410 Test: blockdev nvme admin passthru ...passed 00:36:24.410 Test: blockdev copy ...passed 00:36:24.410 00:36:24.410 Run Summary: Type Total Ran Passed Failed Inactive 00:36:24.410 suites 1 1 n/a 0 0 00:36:24.410 tests 23 23 23 0 0 00:36:24.410 asserts 152 152 152 0 n/a 00:36:24.410 00:36:24.410 Elapsed time = 1.273 seconds 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:24.670 rmmod nvme_tcp 00:36:24.670 rmmod nvme_fabrics 00:36:24.670 rmmod nvme_keyring 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
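
The teardown traced around this point condenses to a few steps: delete the subsystem over RPC, unload the kernel NVMe modules (the rmmod lines above are the result), kill the target behind a ps-based guard, strip the tagged iptables rules, and remove the namespace. A condensed sketch with error handling elided:

"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

modprobe -v -r nvme-tcp      # also pulls out nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics

# killprocess-style guard: only signal the pid if it still names a process.
if [ -n "$(ps --no-headers -o comm= "$nvmfpid")" ]; then
    kill "$nvmfpid"
fi

# Strip the SPDK_NVMF-tagged iptables rules, then tear down the namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk
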
00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 933884 ']' 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 933884 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 933884 ']' 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 933884 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 933884 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 933884' 00:36:24.670 killing process with pid 933884 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 933884 00:36:24.670 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 933884 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.931 13:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.472 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:27.472 00:36:27.472 real 0m13.027s 00:36:27.472 user 0m9.511s 
00:36:27.473 sys 0m7.079s 00:36:27.473 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.473 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:27.473 ************************************ 00:36:27.473 END TEST nvmf_bdevio 00:36:27.473 ************************************ 00:36:27.473 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:27.473 00:36:27.473 real 5m9.165s 00:36:27.473 user 10m15.467s 00:36:27.473 sys 2m11.855s 00:36:27.473 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.473 13:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:27.473 ************************************ 00:36:27.473 END TEST nvmf_target_core_interrupt_mode 00:36:27.473 ************************************ 00:36:27.473 13:11:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:27.473 13:11:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:27.473 13:11:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.473 13:11:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:27.473 ************************************ 00:36:27.473 START TEST nvmf_interrupt 00:36:27.473 ************************************ 00:36:27.473 13:11:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:27.473 * Looking for test storage... 
00:36:27.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:27.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.473 --rc genhtml_branch_coverage=1 00:36:27.473 --rc genhtml_function_coverage=1 00:36:27.473 --rc genhtml_legend=1 00:36:27.473 --rc geninfo_all_blocks=1 00:36:27.473 --rc geninfo_unexecuted_blocks=1 00:36:27.473 00:36:27.473 ' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:27.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.473 --rc genhtml_branch_coverage=1 00:36:27.473 --rc genhtml_function_coverage=1 00:36:27.473 --rc genhtml_legend=1 00:36:27.473 --rc geninfo_all_blocks=1 00:36:27.473 --rc geninfo_unexecuted_blocks=1 00:36:27.473 00:36:27.473 ' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:27.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.473 --rc genhtml_branch_coverage=1 00:36:27.473 --rc genhtml_function_coverage=1 00:36:27.473 --rc genhtml_legend=1 00:36:27.473 --rc geninfo_all_blocks=1 00:36:27.473 --rc geninfo_unexecuted_blocks=1 00:36:27.473 00:36:27.473 ' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:27.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:27.473 --rc genhtml_branch_coverage=1 00:36:27.473 --rc genhtml_function_coverage=1 00:36:27.473 --rc genhtml_legend=1 00:36:27.473 --rc geninfo_all_blocks=1 00:36:27.473 --rc geninfo_unexecuted_blocks=1 00:36:27.473 00:36:27.473 ' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:27.473 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:27.474 13:11:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:35.612 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:35.613 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:35.613 13:11:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:35.613 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:35.613 Found net devices under 0000:31:00.0: cvl_0_0 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:35.613 Found net devices under 0000:31:00.1: cvl_0_1 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:35.613 13:11:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:35.613 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:35.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:35.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:36:35.873 00:36:35.873 --- 10.0.0.2 ping statistics --- 00:36:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:35.873 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:35.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:35.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:36:35.873 00:36:35.873 --- 10.0.0.1 ping statistics --- 00:36:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:35.873 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:35.873 13:11:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=939042 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 939042 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 939042 ']' 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:35.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:35.874 13:11:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:35.874 [2024-11-25 13:11:15.660646] thread.c:2990:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:35.874 [2024-11-25 13:11:15.661784] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:36:35.874 [2024-11-25 13:11:15.661836] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:35.874 [2024-11-25 13:11:15.752622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:36.133 [2024-11-25 13:11:15.793509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
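
As in the first setup pass, the ipts wrapper tags every rule it inserts with an SPDK_NVMF comment; that tag is what lets teardown remove exactly these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore. The tagged insert, copied from the traced command:

# Accept NVMe/TCP traffic on the initiator interface, tagging the rule so a
# later iptables-save dump can be filtered on the SPDK_NVMF comment.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
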
00:36:36.133 [2024-11-25 13:11:15.793544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:36.133 [2024-11-25 13:11:15.793552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:36.133 [2024-11-25 13:11:15.793559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:36.133 [2024-11-25 13:11:15.793564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:36.133 [2024-11-25 13:11:15.794785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.133 [2024-11-25 13:11:15.794788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.133 [2024-11-25 13:11:15.850716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:36.133 [2024-11-25 13:11:15.851190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:36.133 [2024-11-25 13:11:15.851539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:36.704 5000+0 records in 00:36:36.704 5000+0 records out 00:36:36.704 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0190018 s, 539 MB/s 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.704 AIO0 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.704 [2024-11-25 13:11:16.571606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:36.704 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.705 13:11:16 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.705 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:36.965 [2024-11-25 13:11:16.611891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 939042 0 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 939042 0 idle 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=939042 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 939042 -w 256 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 939042 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.25 reactor_0' 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 939042 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.25 reactor_0 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 939042 1 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 939042 1 idle 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=939042 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 939042 -w 256 00:36:36.965 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 939048 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 939048 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=939334 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 939042 0 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 939042 0 busy 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=939042 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:37.227 13:11:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:37.227 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 939042 -w 256 00:36:37.227 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 939042 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0' 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 939042 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.47 reactor_0 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 939042 1 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 939042 1 busy 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=939042 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 939042 -w 256 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 939048 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.30 reactor_1' 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 939048 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.30 reactor_1 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:37.488 13:11:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 939334 00:36:47.490 Initializing NVMe Controllers 00:36:47.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:47.490 Controller IO queue size 256, less than required. 00:36:47.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:47.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:47.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:47.490 Initialization complete. Launching workers. 
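(Editor's note: while the 10-second run above executes, the reactors are expected to go busy. For reference, the spdk_nvme_perf invocation traced above, broken out flag by flag as a minimal bash sketch; the address, port and subsystem NQN are this test bed's values from the log, not general defaults.)

    perf_args=(
        -q 256      # 256 outstanding I/Os per queue pair
        -o 4096     # 4 KiB I/O size
        -w randrw   # random mixed read/write workload
        -M 30       # rwmixread: 30% reads, 70% writes
        -t 10       # run for 10 seconds
        -c 0xC      # core mask 0b1100, i.e. lcores 2 and 3, matching the
                    # "with lcore 2/3" associations printed above
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    )
    ./build/bin/spdk_nvme_perf "${perf_args[@]}"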
00:36:47.490 ======================================================== 00:36:47.490 Latency(us) 00:36:47.490 Device Information : IOPS MiB/s Average min max 00:36:47.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16810.30 65.67 15237.88 2537.82 18707.22 00:36:47.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19870.80 77.62 12885.33 7535.60 29995.10 00:36:47.490 ======================================================== 00:36:47.490 Total : 36681.10 143.29 13963.46 2537.82 29995.10 00:36:47.490 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 939042 0 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 939042 0 idle 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=939042 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 939042 -w 256 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 939042 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.26 reactor_0' 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 939042 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.26 reactor_0 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 939042 1 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 939042 1 idle 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=939042 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 939042 -w 256 00:36:47.490 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 939048 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 939048 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:47.764 13:11:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:48.350 13:11:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:48.350 13:11:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:48.350 13:11:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:48.350 13:11:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:48.350 13:11:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:50.266 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:50.266 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:50.266 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:50.266 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:50.266 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:50.266 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:50.266 13:11:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 939042 0 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 939042 0 idle 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=939042 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 939042 -w 256 00:36:50.267 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 939042 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.50 reactor_0' 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 939042 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.50 reactor_0 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 939042 1 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 939042 1 idle 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=939042 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:50.529 13:11:30 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 939042 -w 256 00:36:50.529 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:50.790 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 939048 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:36:50.790 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 939048 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:36:50.790 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:50.790 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:50.790 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:50.790 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:50.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.791 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.791 rmmod nvme_tcp 00:36:51.052 rmmod nvme_fabrics 00:36:51.052 rmmod nvme_keyring 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 939042 ']' 00:36:51.052 
13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 939042 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 939042 ']' 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 939042 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 939042 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 939042' 00:36:51.052 killing process with pid 939042 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 939042 00:36:51.052 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 939042 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:51.314 13:11:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.228 13:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:53.228 00:36:53.228 real 0m26.132s 00:36:53.228 user 0m40.467s 00:36:53.228 sys 0m10.218s 00:36:53.228 13:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.228 13:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:53.228 ************************************ 00:36:53.228 END TEST nvmf_interrupt 00:36:53.228 ************************************ 00:36:53.228 00:36:53.228 real 31m12.284s 00:36:53.228 user 62m1.486s 00:36:53.228 sys 10m55.062s 00:36:53.228 13:11:33 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.228 13:11:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:53.228 ************************************ 00:36:53.228 END TEST nvmf_tcp 00:36:53.228 ************************************ 00:36:53.228 13:11:33 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:53.228 13:11:33 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:53.228 13:11:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
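(Editor's note: a closing remark on the interrupt test that just finished. The reactor_is_busy_or_idle probe exercised throughout it reduces to one batch sample of threaded top per check. This is a simplified sketch reconstructed from the xtrace, not the helper verbatim; the real code also retries up to 10 times via the j counter seen in the trace.)

    reactor_cpu_rate() {
        local pid=$1 idx=$2 top_reactor cpu_rate
        # one batch iteration of threaded top, restricted to the target pid
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        echo "${cpu_rate%.*}"    # truncate the fraction: 99.9 -> 99, 0.0 -> 0
    }

    # idle: %CPU must not exceed idle_threshold (30); during the perf run,
    # busy: %CPU must reach busy_threshold (lowered to 30 via BUSY_THRESHOLD)
    (( $(reactor_cpu_rate 939042 0) <= 30 )) && echo "reactor_0 is idle"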
00:36:53.228 13:11:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.228 13:11:33 -- common/autotest_common.sh@10 -- # set +x 00:36:53.490 ************************************ 00:36:53.490 START TEST spdkcli_nvmf_tcp 00:36:53.490 ************************************ 00:36:53.490 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:53.490 * Looking for test storage... 00:36:53.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:53.490 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:53.490 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:53.490 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:53.490 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:53.490 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:53.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.491 --rc genhtml_branch_coverage=1 00:36:53.491 --rc genhtml_function_coverage=1 00:36:53.491 --rc genhtml_legend=1 00:36:53.491 --rc geninfo_all_blocks=1 00:36:53.491 --rc geninfo_unexecuted_blocks=1 00:36:53.491 00:36:53.491 ' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:53.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.491 --rc genhtml_branch_coverage=1 00:36:53.491 --rc genhtml_function_coverage=1 00:36:53.491 --rc genhtml_legend=1 00:36:53.491 --rc geninfo_all_blocks=1 00:36:53.491 --rc geninfo_unexecuted_blocks=1 00:36:53.491 00:36:53.491 ' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:53.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.491 --rc genhtml_branch_coverage=1 00:36:53.491 --rc genhtml_function_coverage=1 00:36:53.491 --rc genhtml_legend=1 00:36:53.491 --rc geninfo_all_blocks=1 00:36:53.491 --rc geninfo_unexecuted_blocks=1 00:36:53.491 00:36:53.491 ' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:53.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.491 --rc genhtml_branch_coverage=1 00:36:53.491 --rc genhtml_function_coverage=1 00:36:53.491 --rc genhtml_legend=1 00:36:53.491 --rc geninfo_all_blocks=1 00:36:53.491 --rc geninfo_unexecuted_blocks=1 00:36:53.491 00:36:53.491 ' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:53.491 
13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:53.491 13:11:33 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:53.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:53.491 13:11:33 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=942602 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 942602 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 942602 ']' 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:53.786 13:11:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:53.786 [2024-11-25 13:11:33.458625] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
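(Editor's note: the "[: : integer expression expected" line printed whenever nvmf/common.sh is sourced, here and again further below, is a numeric test ('[' '' -eq 1 ']') running against a variable that is empty in this environment. A defensive sketch of the pattern follows; the variable and flag names are hypothetical stand-ins, not the script's actual ones.)

    # default the value before the numeric comparison so an unset variable
    # no longer trips '[: : integer expression expected'
    if [ "${SPDK_TEST_SOME_FEATURE:-0}" -eq 1 ]; then   # hypothetical flag
        NVMF_APP+=(--some-extra-arg)                    # hypothetical argument
    fi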
00:36:53.786 [2024-11-25 13:11:33.458702] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942602 ] 00:36:53.786 [2024-11-25 13:11:33.541266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:53.786 [2024-11-25 13:11:33.584026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.786 [2024-11-25 13:11:33.584044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:54.528 13:11:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:54.528 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:54.528 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:54.528 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:54.528 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:54.528 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:54.528 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:54.528 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:54.528 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:54.528 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:54.528 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:54.528 ' 00:36:57.828 [2024-11-25 13:11:36.981021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:58.771 [2024-11-25 13:11:38.345414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:01.315 [2024-11-25 13:11:40.881204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:03.228 [2024-11-25 13:11:43.115846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:05.142 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:05.142 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:05.142 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:05.143 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:05.143 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:05.143 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:05.143 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:05.143 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:05.143 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:05.143 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:05.143 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:05.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:05.143 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:05.143 13:11:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:05.143 13:11:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:05.143 13:11:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:05.143 13:11:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:05.143 13:11:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:05.143 13:11:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:05.143 13:11:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:05.143 13:11:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:05.404 13:11:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:05.663 13:11:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:05.663 13:11:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:05.663 13:11:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:05.664 13:11:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:05.664 
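(Editor's note: the check_match step traced above boils down to three commands. A sketch reconstructed from the xtrace; the output redirection is implied rather than shown by the trace, and $rootdir/$testdir abbreviate the long workspace paths for readability.)

    check_match() {
        # dump the live spdkcli /nvmf tree next to its golden file ...
        "$rootdir/scripts/spdkcli.py" ll /nvmf > "$testdir/match_files/spdkcli_nvmf.test"
        # ... compare it against spdkcli_nvmf.test.match with the match tool ...
        "$rootdir/test/app/match/match" "$testdir/match_files/spdkcli_nvmf.test.match"
        # ... and clean up the generated dump
        rm -f "$testdir/match_files/spdkcli_nvmf.test"
    }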
13:11:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:05.664 13:11:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:05.664 13:11:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:05.664 13:11:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:05.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:05.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:05.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:05.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:05.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:05.664 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:05.664 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:05.664 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:05.664 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:05.664 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:05.664 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:05.664 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:05.664 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:05.664 ' 00:37:10.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:10.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:10.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:10.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:10.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:10.952 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:10.952 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:10.952 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:10.952 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:10.952 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:10.952 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:10.952 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:10.952 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:10.952 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.952 
13:11:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 942602 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 942602 ']' 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 942602 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 942602 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 942602' 00:37:10.952 killing process with pid 942602 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 942602 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 942602 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 942602 ']' 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 942602 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 942602 ']' 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 942602 00:37:10.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (942602) - No such process 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 942602 is not found' 00:37:10.952 Process with pid 942602 is not found 00:37:10.952 13:11:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:10.953 13:11:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:10.953 13:11:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:10.953 00:37:10.953 real 0m17.512s 00:37:10.953 user 0m38.018s 00:37:10.953 sys 0m0.826s 00:37:10.953 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:10.953 13:11:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.953 ************************************ 00:37:10.953 END TEST spdkcli_nvmf_tcp 00:37:10.953 ************************************ 00:37:10.953 13:11:50 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:10.953 13:11:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:10.953 13:11:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:10.953 13:11:50 -- common/autotest_common.sh@10 -- # set +x 00:37:10.953 ************************************ 00:37:10.953 START TEST nvmf_identify_passthru 00:37:10.953 ************************************ 00:37:10.953 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:10.953 * Looking for test storage... 
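(Editor's note: the lcov gate traced just below reruns the same lt/cmp_versions helper seen earlier in the spdkcli test: both version strings are split on '.', '-' and ':' and compared field by field, numerically. A simplified reconstruction from the xtrace, assuming purely numeric fields.)

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            if ((a > b)); then [[ $op == '>' ]]; return; fi
            if ((a < b)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *'='* ]]   # all fields equal: true only for <=, >= or ==
    }

    # e.g. the gate traced here: lt 1.15 2 && echo "lcov is older than 2"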
00:37:10.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:10.953 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:10.953 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:10.953 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:37:11.215 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:11.215 13:11:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:37:11.216 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:11.216 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:11.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.216 --rc genhtml_branch_coverage=1 00:37:11.216 --rc genhtml_function_coverage=1 00:37:11.216 --rc genhtml_legend=1 00:37:11.216 --rc geninfo_all_blocks=1 00:37:11.216 --rc geninfo_unexecuted_blocks=1 00:37:11.216 00:37:11.216 ' 00:37:11.216 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:11.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.216 --rc genhtml_branch_coverage=1 00:37:11.216 --rc genhtml_function_coverage=1 00:37:11.216 --rc genhtml_legend=1 00:37:11.216 --rc geninfo_all_blocks=1 00:37:11.216 --rc geninfo_unexecuted_blocks=1 00:37:11.216 00:37:11.216 ' 00:37:11.216 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:11.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.216 --rc genhtml_branch_coverage=1 00:37:11.216 --rc genhtml_function_coverage=1 00:37:11.216 --rc genhtml_legend=1 00:37:11.216 --rc geninfo_all_blocks=1 00:37:11.216 --rc geninfo_unexecuted_blocks=1 00:37:11.216 00:37:11.216 ' 00:37:11.216 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:11.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.216 --rc genhtml_branch_coverage=1 00:37:11.216 --rc genhtml_function_coverage=1 00:37:11.216 --rc genhtml_legend=1 00:37:11.216 --rc geninfo_all_blocks=1 00:37:11.216 --rc geninfo_unexecuted_blocks=1 00:37:11.216 00:37:11.216 ' 00:37:11.216 13:11:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:11.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:11.216 13:11:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.216 13:11:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:11.216 13:11:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.216 13:11:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.216 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:11.216 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:11.216 13:11:50 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:37:11.216 13:11:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:19.362 13:11:58 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:19.362 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:19.363 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:19.363 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:19.363 Found net devices under 0000:31:00.0: cvl_0_0 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:19.363 Found net devices under 0000:31:00.1: cvl_0_1 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:19.363 13:11:58 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:19.363 13:11:58 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:19.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:19.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:37:19.363 00:37:19.363 --- 10.0.0.2 ping statistics --- 00:37:19.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.363 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:19.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:19.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:37:19.363 00:37:19.363 --- 10.0.0.1 ping statistics --- 00:37:19.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:19.363 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:19.363 13:11:59 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:19.363 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:19.363 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:19.363 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:19.624 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:19.624 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:37:19.624 13:11:59 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:37:19.624 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:19.624 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:19.624 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:19.624 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:19.624 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:19.884 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:37:19.884 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:19.884 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:19.884 13:11:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:20.456 13:12:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:20.456 13:12:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:20.456 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:20.456 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:20.456 13:12:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:20.456 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:20.456 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:20.456 13:12:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=950322 00:37:20.456 13:12:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:20.456 13:12:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:20.456 13:12:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 950322 00:37:20.457 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 950322 ']' 00:37:20.457 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:20.457 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:20.457 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:20.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:20.457 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:20.457 13:12:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:20.718 [2024-11-25 13:12:00.359082] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:37:20.718 [2024-11-25 13:12:00.359143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:20.718 [2024-11-25 13:12:00.448522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:20.718 [2024-11-25 13:12:00.487260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:20.718 [2024-11-25 13:12:00.487298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:20.718 [2024-11-25 13:12:00.487306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:20.718 [2024-11-25 13:12:00.487313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:20.718 [2024-11-25 13:12:00.487319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:20.718 [2024-11-25 13:12:00.488894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:20.718 [2024-11-25 13:12:00.489134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.718 [2024-11-25 13:12:00.489134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:20.718 [2024-11-25 13:12:00.488972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.290 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.290 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:21.290 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:21.290 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.290 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:21.290 INFO: Log level set to 20 00:37:21.290 INFO: Requests: 00:37:21.290 { 00:37:21.290 "jsonrpc": "2.0", 00:37:21.290 "method": "nvmf_set_config", 00:37:21.290 "id": 1, 00:37:21.290 "params": { 00:37:21.290 "admin_cmd_passthru": { 00:37:21.290 "identify_ctrlr": true 00:37:21.290 } 00:37:21.290 } 00:37:21.290 } 00:37:21.290 00:37:21.290 INFO: response: 00:37:21.290 { 00:37:21.290 "jsonrpc": "2.0", 00:37:21.290 "id": 1, 00:37:21.290 "result": true 00:37:21.290 } 00:37:21.290 00:37:21.290 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.290 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:21.290 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.290 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:21.290 INFO: Setting log level to 20 00:37:21.290 INFO: Setting log level to 20 00:37:21.290 INFO: Log level set to 20 00:37:21.290 INFO: Log level set to 20 00:37:21.290 INFO: Requests: 00:37:21.290 { 00:37:21.290 "jsonrpc": "2.0", 00:37:21.290 "method": "framework_start_init", 00:37:21.290 "id": 1 00:37:21.290 } 00:37:21.290 00:37:21.290 INFO: Requests: 00:37:21.290 { 00:37:21.290 "jsonrpc": "2.0", 00:37:21.290 "method": "framework_start_init", 00:37:21.290 "id": 1 00:37:21.290 } 00:37:21.290 00:37:21.551 [2024-11-25 13:12:01.221541] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:21.551 INFO: response: 00:37:21.551 { 00:37:21.551 "jsonrpc": "2.0", 00:37:21.551 "id": 1, 00:37:21.551 "result": true 00:37:21.551 } 00:37:21.551 00:37:21.551 INFO: response: 00:37:21.551 { 00:37:21.551 "jsonrpc": "2.0", 00:37:21.551 "id": 1, 00:37:21.551 "result": true 00:37:21.551 } 00:37:21.551 00:37:21.551 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.551 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:21.551 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.552 13:12:01 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:21.552 INFO: Setting log level to 40 00:37:21.552 INFO: Setting log level to 40 00:37:21.552 INFO: Setting log level to 40 00:37:21.552 [2024-11-25 13:12:01.234885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.552 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.552 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:21.552 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:21.552 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:21.552 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:21.552 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.552 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:21.813 Nvme0n1 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.813 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.813 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.813 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:21.813 [2024-11-25 13:12:01.628111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.813 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:21.813 [ 00:37:21.813 { 00:37:21.813 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:21.813 "subtype": "Discovery", 00:37:21.813 "listen_addresses": [], 00:37:21.813 "allow_any_host": true, 00:37:21.813 "hosts": [] 00:37:21.813 }, 00:37:21.813 { 00:37:21.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:21.813 "subtype": "NVMe", 00:37:21.813 "listen_addresses": [ 00:37:21.813 { 00:37:21.813 "trtype": "TCP", 00:37:21.813 "adrfam": "IPv4", 00:37:21.813 "traddr": "10.0.0.2", 00:37:21.813 "trsvcid": "4420" 00:37:21.813 } 00:37:21.813 ], 00:37:21.813 "allow_any_host": true, 00:37:21.813 "hosts": [], 00:37:21.813 "serial_number": 
"SPDK00000000000001", 00:37:21.813 "model_number": "SPDK bdev Controller", 00:37:21.813 "max_namespaces": 1, 00:37:21.813 "min_cntlid": 1, 00:37:21.813 "max_cntlid": 65519, 00:37:21.813 "namespaces": [ 00:37:21.813 { 00:37:21.813 "nsid": 1, 00:37:21.813 "bdev_name": "Nvme0n1", 00:37:21.813 "name": "Nvme0n1", 00:37:21.813 "nguid": "3634473052605494002538450000002D", 00:37:21.813 "uuid": "36344730-5260-5494-0025-38450000002d" 00:37:21.813 } 00:37:21.813 ] 00:37:21.813 } 00:37:21.813 ] 00:37:21.813 13:12:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.813 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:21.813 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:21.813 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:22.074 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:37:22.074 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:22.074 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:22.074 13:12:01 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:22.334 13:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:22.334 13:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:37:22.335 13:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:22.335 13:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.335 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.335 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:22.335 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.335 13:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:22.335 13:12:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:22.335 rmmod nvme_tcp 00:37:22.335 rmmod nvme_fabrics 00:37:22.335 rmmod nvme_keyring 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
950322 ']' 00:37:22.335 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 950322 00:37:22.335 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 950322 ']' 00:37:22.335 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 950322 00:37:22.335 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:22.335 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:22.335 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 950322 00:37:22.594 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:22.594 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:22.594 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 950322' 00:37:22.594 killing process with pid 950322 00:37:22.594 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 950322 00:37:22.594 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 950322 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:22.594 13:12:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.594 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:22.594 13:12:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:25.138 13:12:04 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:25.138 00:37:25.138 real 0m13.826s 00:37:25.138 user 0m10.556s 00:37:25.138 sys 0m7.015s 00:37:25.138 13:12:04 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:25.138 13:12:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:25.138 ************************************ 00:37:25.138 END TEST nvmf_identify_passthru 00:37:25.138 ************************************ 00:37:25.138 13:12:04 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:25.138 13:12:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:25.138 13:12:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:25.138 13:12:04 -- common/autotest_common.sh@10 -- # set +x 00:37:25.138 ************************************ 00:37:25.138 START TEST nvmf_dif 00:37:25.138 ************************************ 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:25.138 * Looking for test storage... 
00:37:25.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:25.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.138 --rc genhtml_branch_coverage=1 00:37:25.138 --rc genhtml_function_coverage=1 00:37:25.138 --rc genhtml_legend=1 00:37:25.138 --rc geninfo_all_blocks=1 00:37:25.138 --rc geninfo_unexecuted_blocks=1 00:37:25.138 00:37:25.138 ' 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:25.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.138 --rc genhtml_branch_coverage=1 00:37:25.138 --rc genhtml_function_coverage=1 00:37:25.138 --rc genhtml_legend=1 00:37:25.138 --rc geninfo_all_blocks=1 00:37:25.138 --rc geninfo_unexecuted_blocks=1 00:37:25.138 00:37:25.138 ' 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:37:25.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.138 --rc genhtml_branch_coverage=1 00:37:25.138 --rc genhtml_function_coverage=1 00:37:25.138 --rc genhtml_legend=1 00:37:25.138 --rc geninfo_all_blocks=1 00:37:25.138 --rc geninfo_unexecuted_blocks=1 00:37:25.138 00:37:25.138 ' 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:25.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:25.138 --rc genhtml_branch_coverage=1 00:37:25.138 --rc genhtml_function_coverage=1 00:37:25.138 --rc genhtml_legend=1 00:37:25.138 --rc geninfo_all_blocks=1 00:37:25.138 --rc geninfo_unexecuted_blocks=1 00:37:25.138 00:37:25.138 ' 00:37:25.138 13:12:04 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:25.138 13:12:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:25.138 13:12:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.138 13:12:04 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.138 13:12:04 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.138 13:12:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:25.138 13:12:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:25.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:25.138 13:12:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:25.138 13:12:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:25.138 13:12:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:25.138 13:12:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:25.138 13:12:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:25.138 13:12:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:25.138 13:12:04 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:37:25.139 13:12:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:33.296 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.296 
13:12:12 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:33.296 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:33.296 Found net devices under 0000:31:00.0: cvl_0_0 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:33.296 Found net devices under 0000:31:00.1: cvl_0_1 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:33.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:37:33.296 00:37:33.296 --- 10.0.0.2 ping statistics --- 00:37:33.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.296 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:33.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:37:33.296 00:37:33.296 --- 10.0.0.1 ping statistics --- 00:37:33.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.296 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:33.296 13:12:12 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:37.502 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:37.502 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:37.502 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:37.502 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:37.502 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:37.502 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:37.503 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:37.503 13:12:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:37.503 13:12:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:37.503 13:12:17 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.503 13:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=957593 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 957593 00:37:37.503 13:12:17 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:37.503 13:12:17 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 957593 ']' 00:37:37.503 13:12:17 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.503 13:12:17 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.503 13:12:17 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:37:37.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.503 13:12:17 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:37.503 13:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:37.503 [2024-11-25 13:12:17.141981] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:37:37.503 [2024-11-25 13:12:17.142029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.503 [2024-11-25 13:12:17.227630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.503 [2024-11-25 13:12:17.262138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.503 [2024-11-25 13:12:17.262171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.503 [2024-11-25 13:12:17.262179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.503 [2024-11-25 13:12:17.262186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.503 [2024-11-25 13:12:17.262192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:37.503 [2024-11-25 13:12:17.262778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.075 13:12:17 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.075 13:12:17 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:38.075 13:12:17 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:38.075 13:12:17 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:38.075 13:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:38.075 13:12:17 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:38.075 13:12:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:38.337 13:12:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:38.337 13:12:17 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.337 13:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:38.337 [2024-11-25 13:12:17.982827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:38.337 13:12:17 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.337 13:12:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:38.337 13:12:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:38.337 13:12:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:38.337 13:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:38.337 ************************************ 00:37:38.337 START TEST fio_dif_1_default 00:37:38.337 ************************************ 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:38.337 bdev_null0 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:38.337 [2024-11-25 13:12:18.071189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:38.337 { 00:37:38.337 "params": { 00:37:38.337 "name": "Nvme$subsystem", 00:37:38.337 "trtype": "$TEST_TRANSPORT", 00:37:38.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.337 "adrfam": "ipv4", 00:37:38.337 "trsvcid": "$NVMF_PORT", 00:37:38.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.337 "hdgst": ${hdgst:-false}, 00:37:38.337 
"ddgst": ${ddgst:-false} 00:37:38.337 }, 00:37:38.337 "method": "bdev_nvme_attach_controller" 00:37:38.337 } 00:37:38.337 EOF 00:37:38.337 )") 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:38.337 "params": { 00:37:38.337 "name": "Nvme0", 00:37:38.337 "trtype": "tcp", 00:37:38.337 "traddr": "10.0.0.2", 00:37:38.337 "adrfam": "ipv4", 00:37:38.337 "trsvcid": "4420", 00:37:38.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:38.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:38.337 "hdgst": false, 00:37:38.337 "ddgst": false 00:37:38.337 }, 00:37:38.337 "method": "bdev_nvme_attach_controller" 00:37:38.337 }' 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:38.337 13:12:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:38.907 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:38.907 fio-3.35 00:37:38.907 Starting 1 thread 00:37:51.148 00:37:51.148 filename0: (groupid=0, jobs=1): err= 0: pid=958123: Mon Nov 25 13:12:29 2024 00:37:51.148 read: IOPS=97, BW=388KiB/s (397kB/s)(3888KiB/10019msec) 00:37:51.148 slat (nsec): min=5448, max=54413, avg=6278.59, stdev=2264.75 00:37:51.148 clat (usec): min=40805, max=42962, avg=41209.52, stdev=428.37 00:37:51.148 lat (usec): min=40813, max=42968, avg=41215.80, stdev=428.59 00:37:51.148 clat percentiles (usec): 00:37:51.148 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:51.148 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:51.148 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:51.148 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:51.148 | 99.99th=[42730] 00:37:51.148 bw ( KiB/s): min= 384, max= 416, per=99.73%, avg=387.20, stdev= 9.85, samples=20 00:37:51.148 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:37:51.148 lat (msec) : 50=100.00% 00:37:51.148 cpu : usr=93.56%, sys=6.22%, ctx=13, majf=0, minf=258 00:37:51.148 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:51.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:51.148 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:51.148 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:51.148 00:37:51.148 Run status group 0 (all jobs): 
00:37:51.148 READ: bw=388KiB/s (397kB/s), 388KiB/s-388KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10019-10019msec 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:51.148 13:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.149 00:37:51.149 real 0m11.329s 00:37:51.149 user 0m23.255s 00:37:51.149 sys 0m0.947s 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 ************************************ 00:37:51.149 END TEST fio_dif_1_default 00:37:51.149 ************************************ 00:37:51.149 13:12:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:51.149 13:12:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:51.149 13:12:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 ************************************ 00:37:51.149 START TEST fio_dif_1_multi_subsystems 00:37:51.149 ************************************ 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 bdev_null0 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 [2024-11-25 13:12:29.479458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 bdev_null1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:51.149 { 00:37:51.149 "params": { 00:37:51.149 "name": "Nvme$subsystem", 00:37:51.149 "trtype": "$TEST_TRANSPORT", 00:37:51.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.149 "adrfam": "ipv4", 00:37:51.149 "trsvcid": "$NVMF_PORT", 00:37:51.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.149 "hdgst": ${hdgst:-false}, 00:37:51.149 "ddgst": ${ddgst:-false} 00:37:51.149 }, 00:37:51.149 "method": "bdev_nvme_attach_controller" 00:37:51.149 } 00:37:51.149 EOF 00:37:51.149 )") 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:51.149 { 00:37:51.149 "params": { 00:37:51.149 "name": "Nvme$subsystem", 00:37:51.149 "trtype": "$TEST_TRANSPORT", 00:37:51.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.149 "adrfam": "ipv4", 00:37:51.149 "trsvcid": "$NVMF_PORT", 00:37:51.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.149 "hdgst": ${hdgst:-false}, 00:37:51.149 "ddgst": ${ddgst:-false} 00:37:51.149 }, 00:37:51.149 "method": "bdev_nvme_attach_controller" 00:37:51.149 } 00:37:51.149 EOF 00:37:51.149 )") 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:51.149 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:51.149 "params": { 00:37:51.149 "name": "Nvme0", 00:37:51.149 "trtype": "tcp", 00:37:51.149 "traddr": "10.0.0.2", 00:37:51.149 "adrfam": "ipv4", 00:37:51.149 "trsvcid": "4420", 00:37:51.149 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:51.149 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:51.149 "hdgst": false, 00:37:51.149 "ddgst": false 00:37:51.149 }, 00:37:51.149 "method": "bdev_nvme_attach_controller" 00:37:51.149 },{ 00:37:51.149 "params": { 00:37:51.149 "name": "Nvme1", 00:37:51.149 "trtype": "tcp", 00:37:51.149 "traddr": "10.0.0.2", 00:37:51.149 "adrfam": "ipv4", 00:37:51.149 "trsvcid": "4420", 00:37:51.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:51.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:51.150 "hdgst": false, 00:37:51.150 "ddgst": false 00:37:51.150 }, 00:37:51.150 "method": "bdev_nvme_attach_controller" 00:37:51.150 }' 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:51.150 13:12:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:51.150 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:51.150 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:51.150 fio-3.35 00:37:51.150 Starting 2 threads 00:38:01.149 00:38:01.149 filename0: (groupid=0, jobs=1): err= 0: pid=960483: Mon Nov 25 13:12:40 2024 00:38:01.149 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10023msec) 00:38:01.149 slat (nsec): min=5466, max=33250, avg=6315.50, stdev=1810.36 00:38:01.149 clat (usec): min=623, max=43038, avg=21082.51, stdev=20193.79 00:38:01.149 lat (usec): min=629, max=43044, avg=21088.83, stdev=20193.73 00:38:01.149 clat percentiles (usec): 00:38:01.149 | 1.00th=[ 783], 5.00th=[ 889], 10.00th=[ 914], 20.00th=[ 930], 00:38:01.149 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 2024], 60.00th=[41157], 00:38:01.149 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:38:01.149 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:38:01.150 | 99.99th=[43254] 00:38:01.150 bw ( KiB/s): min= 704, max= 768, per=50.21%, avg=758.40, stdev=21.02, samples=20 00:38:01.150 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:38:01.150 lat (usec) : 750=0.84%, 1000=47.53% 00:38:01.150 lat (msec) : 2=1.53%, 4=0.21%, 50=49.89% 00:38:01.150 cpu : usr=95.33%, sys=4.45%, ctx=19, majf=0, minf=132 00:38:01.150 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:01.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:01.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:01.150 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:01.150 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:01.150 filename1: (groupid=0, jobs=1): err= 0: pid=960484: Mon Nov 25 13:12:40 2024 00:38:01.150 read: IOPS=188, BW=752KiB/s (771kB/s)(7552KiB/10036msec) 00:38:01.150 slat (nsec): min=5460, max=32482, avg=6253.29, stdev=1353.40 00:38:01.150 clat (usec): min=689, max=43035, avg=21244.69, stdev=20225.21 00:38:01.150 lat (usec): min=697, max=43044, avg=21250.95, stdev=20225.20 00:38:01.150 clat percentiles (usec): 00:38:01.150 | 1.00th=[ 865], 5.00th=[ 898], 10.00th=[ 914], 20.00th=[ 930], 00:38:01.150 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[41157], 60.00th=[41157], 00:38:01.150 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:38:01.150 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:38:01.150 | 99.99th=[43254] 00:38:01.150 bw ( KiB/s): min= 672, max= 768, per=49.88%, avg=753.60, stdev=26.42, samples=20 00:38:01.150 iops : min= 168, max= 192, avg=188.40, stdev= 6.60, samples=20 00:38:01.150 lat (usec) : 750=0.21%, 1000=46.98% 00:38:01.150 lat (msec) : 2=2.60%, 50=50.21% 00:38:01.150 cpu : usr=95.06%, sys=4.73%, ctx=12, majf=0, minf=131 00:38:01.150 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:01.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:01.150 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:01.150 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:01.150 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:01.150 00:38:01.150 Run status group 0 (all jobs): 00:38:01.150 READ: bw=1510KiB/s (1546kB/s), 752KiB/s-758KiB/s (771kB/s-776kB/s), io=14.8MiB (15.5MB), run=10023-10036msec 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.150 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:01.412 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.412 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:01.412 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.412 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:01.412 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.412 00:38:01.412 real 0m11.624s 00:38:01.412 user 0m37.592s 00:38:01.412 sys 0m1.238s 00:38:01.412 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:01.412 13:12:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:01.412 ************************************ 00:38:01.412 END TEST fio_dif_1_multi_subsystems 00:38:01.412 ************************************ 00:38:01.412 13:12:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 
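The fio_dif_rand_params pass that starts here rebuilds the null bdev with protection information enabled (--dif-type 3, as the log shows below) and drives it with 128 KiB random reads at three jobs and queue depth 3. A short annotated sketch of the bdev geometry involved; the create command and sizes are taken from this log, while the comments on how the metadata is used are an assumption:

    # bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    #   64          total bdev size in MiB
    #   512         logical block size exposed to the initiator
    #   --md-size   16 bytes of per-block metadata holding the DIF tags
    # With the transport created as -t tcp -o --dif-insert-or-strip, the
    # target inserts the protection information on write and strips it on
    # read, so the host-side fio job still sees plain 512-byte blocks.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3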
00:38:01.412 13:12:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:01.412 13:12:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:01.412 13:12:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:01.412 ************************************ 00:38:01.412 START TEST fio_dif_rand_params 00:38:01.412 ************************************ 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:01.412 bdev_null0 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:01.412 [2024-11-25 13:12:41.188107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:01.412 13:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:01.412 { 00:38:01.412 "params": { 00:38:01.412 "name": "Nvme$subsystem", 00:38:01.412 "trtype": "$TEST_TRANSPORT", 00:38:01.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:01.413 "adrfam": "ipv4", 00:38:01.413 "trsvcid": "$NVMF_PORT", 00:38:01.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:01.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:01.413 "hdgst": ${hdgst:-false}, 00:38:01.413 "ddgst": ${ddgst:-false} 00:38:01.413 }, 00:38:01.413 "method": "bdev_nvme_attach_controller" 00:38:01.413 } 00:38:01.413 EOF 00:38:01.413 )") 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
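Alongside the JSON fed to /dev/fd/62, gen_fio_conf writes the fio job file to the second descriptor (/dev/fd/61). A rough reconstruction of the job section implied by the NULL_DIF=3 parameters and the fio banner that follows; the exact keys gen_fio_conf emits are an assumption, the values are from this run:

    [filename0]
    ioengine=spdk_bdev
    filename=Nvme0n1      ; assumed namespace bdev name for cnode0
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5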
00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:01.413 "params": { 00:38:01.413 "name": "Nvme0", 00:38:01.413 "trtype": "tcp", 00:38:01.413 "traddr": "10.0.0.2", 00:38:01.413 "adrfam": "ipv4", 00:38:01.413 "trsvcid": "4420", 00:38:01.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:01.413 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:01.413 "hdgst": false, 00:38:01.413 "ddgst": false 00:38:01.413 }, 00:38:01.413 "method": "bdev_nvme_attach_controller" 00:38:01.413 }' 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:01.413 13:12:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:01.999 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:01.999 ... 
00:38:01.999 fio-3.35 00:38:01.999 Starting 3 threads 00:38:08.582 00:38:08.582 filename0: (groupid=0, jobs=1): err= 0: pid=962835: Mon Nov 25 13:12:47 2024 00:38:08.582 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(142MiB/5046msec) 00:38:08.582 slat (nsec): min=5531, max=32576, avg=7533.46, stdev=1736.70 00:38:08.582 clat (usec): min=5711, max=54927, avg=13240.90, stdev=7989.79 00:38:08.582 lat (usec): min=5720, max=54936, avg=13248.43, stdev=7989.86 00:38:08.582 clat percentiles (usec): 00:38:08.582 | 1.00th=[ 6652], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[10159], 00:38:08.582 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:38:08.582 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14222], 95.00th=[15401], 00:38:08.582 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53740], 99.95th=[54789], 00:38:08.582 | 99.99th=[54789] 00:38:08.582 bw ( KiB/s): min=24064, max=33536, per=32.55%, avg=29107.20, stdev=2921.47, samples=10 00:38:08.582 iops : min= 188, max= 262, avg=227.40, stdev=22.82, samples=10 00:38:08.582 lat (msec) : 10=18.79%, 20=76.82%, 50=2.72%, 100=1.67% 00:38:08.582 cpu : usr=95.22%, sys=4.50%, ctx=8, majf=0, minf=107 00:38:08.582 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:08.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.582 issued rwts: total=1139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:08.582 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:08.582 filename0: (groupid=0, jobs=1): err= 0: pid=962836: Mon Nov 25 13:12:47 2024 00:38:08.582 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5007msec) 00:38:08.582 slat (nsec): min=5497, max=45018, avg=7560.88, stdev=2063.37 00:38:08.582 clat (usec): min=5710, max=55221, avg=12619.88, stdev=7917.17 00:38:08.582 lat (usec): min=5716, max=55230, avg=12627.44, stdev=7917.20 00:38:08.582 clat percentiles (usec): 00:38:08.582 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 8356], 20.00th=[ 9634], 00:38:08.582 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:38:08.582 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14091], 95.00th=[15533], 00:38:08.582 | 99.00th=[51643], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:38:08.582 | 99.99th=[55313] 00:38:08.582 bw ( KiB/s): min=20736, max=35072, per=33.98%, avg=30387.20, stdev=4597.00, samples=10 00:38:08.582 iops : min= 162, max= 274, avg=237.40, stdev=35.91, samples=10 00:38:08.582 lat (msec) : 10=26.41%, 20=69.55%, 50=2.10%, 100=1.93% 00:38:08.582 cpu : usr=93.93%, sys=5.81%, ctx=11, majf=0, minf=115 00:38:08.582 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:08.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.582 issued rwts: total=1189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:08.582 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:08.582 filename0: (groupid=0, jobs=1): err= 0: pid=962837: Mon Nov 25 13:12:47 2024 00:38:08.582 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(150MiB/5045msec) 00:38:08.582 slat (nsec): min=5555, max=51447, avg=8148.29, stdev=2236.11 00:38:08.582 clat (usec): min=5170, max=55804, avg=12596.95, stdev=6546.61 00:38:08.582 lat (usec): min=5179, max=55813, avg=12605.10, stdev=6546.79 00:38:08.582 clat percentiles (usec): 00:38:08.582 | 1.00th=[ 5997], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 
9765], 00:38:08.582 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11731], 60.00th=[12387], 00:38:08.582 | 70.00th=[13173], 80.00th=[13829], 90.00th=[14746], 95.00th=[15795], 00:38:08.582 | 99.00th=[51119], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:38:08.582 | 99.99th=[55837] 00:38:08.582 bw ( KiB/s): min=16160, max=36352, per=34.22%, avg=30595.20, stdev=5618.01, samples=10 00:38:08.582 iops : min= 126, max= 284, avg=239.00, stdev=43.96, samples=10 00:38:08.582 lat (msec) : 10=22.89%, 20=74.44%, 50=1.59%, 100=1.09% 00:38:08.582 cpu : usr=94.29%, sys=5.43%, ctx=13, majf=0, minf=98 00:38:08.582 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:08.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.582 issued rwts: total=1197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:08.582 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:08.582 00:38:08.582 Run status group 0 (all jobs): 00:38:08.582 READ: bw=87.3MiB/s (91.6MB/s), 28.2MiB/s-29.7MiB/s (29.6MB/s-31.1MB/s), io=441MiB (462MB), run=5007-5046msec 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:08.582 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 bdev_null0 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 [2024-11-25 13:12:47.564101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 bdev_null1 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 bdev_null2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:08.583 13:12:47 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:08.583 { 00:38:08.583 "params": { 00:38:08.583 "name": "Nvme$subsystem", 00:38:08.583 "trtype": "$TEST_TRANSPORT", 00:38:08.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:08.583 "adrfam": "ipv4", 00:38:08.583 "trsvcid": "$NVMF_PORT", 00:38:08.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:08.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:08.583 "hdgst": ${hdgst:-false}, 00:38:08.583 "ddgst": ${ddgst:-false} 00:38:08.583 }, 00:38:08.583 "method": "bdev_nvme_attach_controller" 00:38:08.583 } 00:38:08.583 EOF 00:38:08.583 )") 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:08.583 { 00:38:08.583 "params": { 00:38:08.583 "name": "Nvme$subsystem", 00:38:08.583 "trtype": "$TEST_TRANSPORT", 00:38:08.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:08.583 "adrfam": "ipv4", 00:38:08.583 "trsvcid": "$NVMF_PORT", 00:38:08.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:08.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:08.583 "hdgst": ${hdgst:-false}, 00:38:08.583 "ddgst": ${ddgst:-false} 00:38:08.583 }, 00:38:08.583 "method": "bdev_nvme_attach_controller" 00:38:08.583 } 00:38:08.583 EOF 00:38:08.583 )") 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:08.583 13:12:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:08.583 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:08.583 { 00:38:08.583 "params": { 00:38:08.583 "name": "Nvme$subsystem", 00:38:08.583 "trtype": "$TEST_TRANSPORT", 00:38:08.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:08.583 "adrfam": "ipv4", 00:38:08.583 "trsvcid": "$NVMF_PORT", 00:38:08.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:08.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:08.583 "hdgst": ${hdgst:-false}, 00:38:08.583 "ddgst": ${ddgst:-false} 00:38:08.583 }, 00:38:08.583 "method": "bdev_nvme_attach_controller" 00:38:08.583 } 00:38:08.583 EOF 00:38:08.584 )") 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:08.584 "params": { 00:38:08.584 "name": "Nvme0", 00:38:08.584 "trtype": "tcp", 00:38:08.584 "traddr": "10.0.0.2", 00:38:08.584 "adrfam": "ipv4", 00:38:08.584 "trsvcid": "4420", 00:38:08.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:08.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:08.584 "hdgst": false, 00:38:08.584 "ddgst": false 00:38:08.584 }, 00:38:08.584 "method": "bdev_nvme_attach_controller" 00:38:08.584 },{ 00:38:08.584 "params": { 00:38:08.584 "name": "Nvme1", 00:38:08.584 "trtype": "tcp", 00:38:08.584 "traddr": "10.0.0.2", 00:38:08.584 "adrfam": "ipv4", 00:38:08.584 "trsvcid": "4420", 00:38:08.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:08.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:08.584 "hdgst": false, 00:38:08.584 "ddgst": false 00:38:08.584 }, 00:38:08.584 "method": "bdev_nvme_attach_controller" 00:38:08.584 },{ 00:38:08.584 "params": { 00:38:08.584 "name": "Nvme2", 00:38:08.584 "trtype": "tcp", 00:38:08.584 "traddr": "10.0.0.2", 00:38:08.584 "adrfam": "ipv4", 00:38:08.584 "trsvcid": "4420", 00:38:08.584 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:08.584 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:08.584 "hdgst": false, 00:38:08.584 "ddgst": false 00:38:08.584 }, 00:38:08.584 "method": "bdev_nvme_attach_controller" 00:38:08.584 }' 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 
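[Editor's sketch] The create_subsystem trace above reduces to a short RPC sequence per subsystem: create a null bdev carrying 16 bytes of per-block metadata with the requested DIF type, wrap it in an NVMe-oF subsystem, attach the bdev as a namespace, and expose a TCP listener. A consolidated, hedged sketch using SPDK's scripts/rpc.py is below; it assumes the TCP transport was created earlier in the run (nvmf_create_transport -t tcp) and reuses the fixed 10.0.0.2:4420 address the trace shows. The RPC_PY path is an illustrative assumption.

    #!/usr/bin/env bash
    # Sketch only: per-subsystem setup mirroring target/dif.sh create_subsystem.
    # Assumes the nvmf TCP transport already exists (nvmf_create_transport -t tcp).
    RPC_PY=./scripts/rpc.py   # assumed path to SPDK's rpc.py

    create_subsystem() {
        local sub_id=$1 dif_type=$2
        # 64 MB null bdev, 512-byte blocks, 16-byte metadata, given DIF type
        $RPC_PY bdev_null_create "bdev_null${sub_id}" 64 512 \
            --md-size 16 --dif-type "$dif_type"
        # NVMe-oF subsystem with a per-index serial number, open to any host
        $RPC_PY nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
            --serial-number "53313233-${sub_id}" --allow-any-host
        # Attach the null bdev as namespace 1 of the subsystem
        $RPC_PY nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" \
            "bdev_null${sub_id}"
        # Expose the subsystem on the TCP listener the fio host connects to
        $RPC_PY nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
            -t tcp -a 10.0.0.2 -s 4420
    }

    # DIF type 2 for the three-subsystem run traced here
    for id in 0 1 2; do create_subsystem "$id" 2; done

Teardown, as in the destroy_subsystems trace, is the mirror image: nvmf_delete_subsystem for each cnode, then bdev_null_delete for each null bdev.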
00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:08.584 13:12:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:08.584 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:08.584 ... 00:38:08.584 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:08.584 ... 00:38:08.584 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:08.584 ... 00:38:08.584 fio-3.35 00:38:08.584 Starting 24 threads 00:38:20.920 00:38:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=964339: Mon Nov 25 13:12:59 2024 00:38:20.920 read: IOPS=641, BW=2568KiB/s (2629kB/s)(25.1MiB/10020msec) 00:38:20.920 slat (nsec): min=5475, max=73033, avg=8030.42, stdev=5349.86 00:38:20.920 clat (usec): min=2821, max=34607, avg=24854.90, stdev=5931.67 00:38:20.920 lat (usec): min=2839, max=34614, avg=24862.93, stdev=5932.49 00:38:20.920 clat percentiles (usec): 00:38:20.920 | 1.00th=[10028], 5.00th=[18220], 10.00th=[18744], 20.00th=[19792], 00:38:20.920 | 30.00th=[20579], 40.00th=[21365], 50.00th=[24773], 60.00th=[26346], 00:38:20.920 | 70.00th=[27657], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:38:20.920 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:38:20.920 | 99.99th=[34866] 00:38:20.920 bw ( KiB/s): min= 1920, max= 2944, per=5.41%, avg=2566.40, stdev=367.89, samples=20 00:38:20.920 iops : min= 480, max= 736, avg=641.60, stdev=91.97, samples=20 00:38:20.920 lat (msec) : 4=0.50%, 10=0.53%, 20=20.85%, 50=78.12% 00:38:20.920 cpu : usr=98.84%, sys=0.84%, ctx=19, majf=0, minf=44 00:38:20.920 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:20.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 issued rwts: total=6432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=964340: Mon Nov 25 13:12:59 2024 00:38:20.920 read: IOPS=486, BW=1944KiB/s (1991kB/s)(19.0MiB/10008msec) 00:38:20.920 slat (usec): min=5, max=128, avg=27.50, stdev=22.02 00:38:20.920 clat (usec): min=8004, max=40331, avg=32684.25, stdev=2088.12 00:38:20.920 lat (usec): min=8016, max=40338, avg=32711.75, stdev=2089.12 00:38:20.920 clat percentiles (usec): 00:38:20.920 | 1.00th=[21627], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:20.920 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.920 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:20.920 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36963], 99.95th=[40109], 00:38:20.920 | 99.99th=[40109] 00:38:20.920 bw ( KiB/s): min= 1916, max= 2176, per=4.10%, avg=1946.74, stdev=68.61, samples=19 00:38:20.920 iops : min= 479, max= 544, avg=486.68, stdev=17.15, samples=19 00:38:20.920 lat (msec) : 10=0.14%, 20=0.66%, 50=99.20% 00:38:20.920 cpu : usr=98.67%, sys=0.81%, ctx=79, majf=0, minf=39 00:38:20.920 IO depths : 1=6.1%, 2=12.3%, 
4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:20.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=964341: Mon Nov 25 13:12:59 2024 00:38:20.920 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10010msec) 00:38:20.920 slat (usec): min=5, max=106, avg=18.84, stdev=16.81 00:38:20.920 clat (usec): min=13628, max=67557, avg=32343.87, stdev=3742.55 00:38:20.920 lat (usec): min=13634, max=67572, avg=32362.71, stdev=3744.53 00:38:20.920 clat percentiles (usec): 00:38:20.920 | 1.00th=[20055], 5.00th=[23987], 10.00th=[28443], 20.00th=[32113], 00:38:20.920 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:38:20.920 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:38:20.920 | 99.00th=[41157], 99.50th=[45351], 99.90th=[67634], 99.95th=[67634], 00:38:20.920 | 99.99th=[67634] 00:38:20.920 bw ( KiB/s): min= 1792, max= 2208, per=4.13%, avg=1960.42, stdev=102.92, samples=19 00:38:20.920 iops : min= 448, max= 552, avg=490.11, stdev=25.73, samples=19 00:38:20.920 lat (msec) : 20=1.01%, 50=98.66%, 100=0.32% 00:38:20.920 cpu : usr=98.87%, sys=0.77%, ctx=47, majf=0, minf=28 00:38:20.920 IO depths : 1=4.3%, 2=8.7%, 4=19.1%, 8=59.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:38:20.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 complete : 0=0.0%, 4=92.3%, 8=2.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=964342: Mon Nov 25 13:12:59 2024 00:38:20.920 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10023msec) 00:38:20.920 slat (nsec): min=5493, max=87323, avg=14186.59, stdev=12298.48 00:38:20.920 clat (usec): min=9609, max=36768, avg=32634.44, stdev=2720.17 00:38:20.920 lat (usec): min=9617, max=36778, avg=32648.62, stdev=2718.74 00:38:20.920 clat percentiles (usec): 00:38:20.920 | 1.00th=[15401], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:38:20.920 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33424], 00:38:20.920 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:38:20.920 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:38:20.920 | 99.99th=[36963] 00:38:20.920 bw ( KiB/s): min= 1792, max= 2304, per=4.12%, avg=1951.80, stdev=100.73, samples=20 00:38:20.920 iops : min= 448, max= 576, avg=487.95, stdev=25.18, samples=20 00:38:20.920 lat (msec) : 10=0.04%, 20=1.92%, 50=98.04% 00:38:20.920 cpu : usr=98.86%, sys=0.74%, ctx=41, majf=0, minf=42 00:38:20.920 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:20.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=964344: Mon Nov 25 13:12:59 2024 00:38:20.920 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10008msec) 00:38:20.920 slat (usec): min=5, max=104, avg=23.68, stdev=17.57 
00:38:20.920 clat (usec): min=13066, max=53495, avg=32043.03, stdev=3861.69 00:38:20.920 lat (usec): min=13095, max=53510, avg=32066.71, stdev=3864.94 00:38:20.920 clat percentiles (usec): 00:38:20.920 | 1.00th=[18744], 5.00th=[23987], 10.00th=[27132], 20.00th=[32113], 00:38:20.920 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:38:20.920 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:38:20.920 | 99.00th=[44303], 99.50th=[47449], 99.90th=[53216], 99.95th=[53740], 00:38:20.920 | 99.99th=[53740] 00:38:20.920 bw ( KiB/s): min= 1795, max= 2336, per=4.17%, avg=1975.74, stdev=124.19, samples=19 00:38:20.920 iops : min= 448, max= 584, avg=493.89, stdev=31.11, samples=19 00:38:20.920 lat (msec) : 20=2.40%, 50=97.28%, 100=0.32% 00:38:20.920 cpu : usr=98.99%, sys=0.69%, ctx=15, majf=0, minf=30 00:38:20.920 IO depths : 1=4.5%, 2=9.1%, 4=19.6%, 8=58.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:38:20.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 complete : 0=0.0%, 4=92.8%, 8=2.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.920 issued rwts: total=4964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.920 filename0: (groupid=0, jobs=1): err= 0: pid=964345: Mon Nov 25 13:12:59 2024 00:38:20.920 read: IOPS=482, BW=1929KiB/s (1976kB/s)(18.9MiB/10014msec) 00:38:20.920 slat (usec): min=5, max=134, avg=31.19, stdev=17.96 00:38:20.921 clat (usec): min=14882, max=59941, avg=32883.52, stdev=2129.30 00:38:20.921 lat (usec): min=14893, max=59957, avg=32914.70, stdev=2129.20 00:38:20.921 clat percentiles (usec): 00:38:20.921 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:38:20.921 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:38:20.921 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:20.921 | 99.00th=[35390], 99.50th=[36439], 99.90th=[60031], 99.95th=[60031], 00:38:20.921 | 99.99th=[60031] 00:38:20.921 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1926.53, stdev=51.83, samples=19 00:38:20.921 iops : min= 448, max= 512, avg=481.63, stdev=12.96, samples=19 00:38:20.921 lat (msec) : 20=0.29%, 50=99.38%, 100=0.33% 00:38:20.921 cpu : usr=98.96%, sys=0.61%, ctx=40, majf=0, minf=41 00:38:20.921 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:20.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 issued rwts: total=4830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.921 filename0: (groupid=0, jobs=1): err= 0: pid=964346: Mon Nov 25 13:12:59 2024 00:38:20.921 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10015msec) 00:38:20.921 slat (usec): min=5, max=124, avg=33.05, stdev=19.22 00:38:20.921 clat (usec): min=16997, max=36903, avg=32736.42, stdev=1420.26 00:38:20.921 lat (usec): min=17005, max=36924, avg=32769.47, stdev=1422.17 00:38:20.921 clat percentiles (usec): 00:38:20.921 | 1.00th=[29230], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:20.921 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:38:20.921 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:20.921 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:38:20.921 | 99.99th=[36963] 00:38:20.921 bw ( KiB/s): min= 1792, max= 2048, 
per=4.08%, avg=1933.47, stdev=58.73, samples=19 00:38:20.921 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:38:20.921 lat (msec) : 20=0.33%, 50=99.67% 00:38:20.921 cpu : usr=98.48%, sys=0.93%, ctx=101, majf=0, minf=28 00:38:20.921 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:20.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.921 filename0: (groupid=0, jobs=1): err= 0: pid=964347: Mon Nov 25 13:12:59 2024 00:38:20.921 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.0MiB/10004msec) 00:38:20.921 slat (usec): min=5, max=104, avg=18.47, stdev=14.82 00:38:20.921 clat (usec): min=11070, max=49245, avg=32702.73, stdev=2605.35 00:38:20.921 lat (usec): min=11084, max=49254, avg=32721.19, stdev=2605.45 00:38:20.921 clat percentiles (usec): 00:38:20.921 | 1.00th=[20841], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:38:20.921 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.921 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:20.921 | 99.00th=[36963], 99.50th=[43254], 99.90th=[49021], 99.95th=[49021], 00:38:20.921 | 99.99th=[49021] 00:38:20.921 bw ( KiB/s): min= 1920, max= 2224, per=4.11%, avg=1949.47, stdev=77.70, samples=19 00:38:20.921 iops : min= 480, max= 556, avg=487.37, stdev=19.43, samples=19 00:38:20.921 lat (msec) : 20=0.86%, 50=99.14% 00:38:20.921 cpu : usr=98.45%, sys=0.95%, ctx=185, majf=0, minf=34 00:38:20.921 IO depths : 1=5.8%, 2=11.8%, 4=24.2%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:38:20.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.921 filename1: (groupid=0, jobs=1): err= 0: pid=964348: Mon Nov 25 13:12:59 2024 00:38:20.921 read: IOPS=496, BW=1986KiB/s (2033kB/s)(19.4MiB/10008msec) 00:38:20.921 slat (usec): min=5, max=101, avg=24.38, stdev=18.05 00:38:20.921 clat (usec): min=13265, max=57208, avg=32025.79, stdev=4556.90 00:38:20.921 lat (usec): min=13281, max=57215, avg=32050.17, stdev=4560.01 00:38:20.921 clat percentiles (usec): 00:38:20.921 | 1.00th=[19268], 5.00th=[22414], 10.00th=[26346], 20.00th=[31851], 00:38:20.921 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:38:20.921 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[37487], 00:38:20.921 | 99.00th=[46924], 99.50th=[52691], 99.90th=[56361], 99.95th=[57410], 00:38:20.921 | 99.99th=[57410] 00:38:20.921 bw ( KiB/s): min= 1795, max= 2144, per=4.17%, avg=1977.42, stdev=94.83, samples=19 00:38:20.921 iops : min= 448, max= 536, avg=494.32, stdev=23.79, samples=19 00:38:20.921 lat (msec) : 20=1.73%, 50=97.46%, 100=0.81% 00:38:20.921 cpu : usr=99.00%, sys=0.66%, ctx=17, majf=0, minf=34 00:38:20.921 IO depths : 1=3.8%, 2=7.7%, 4=17.8%, 8=61.5%, 16=9.2%, 32=0.0%, >=64=0.0% 00:38:20.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 complete : 0=0.0%, 4=92.2%, 8=2.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 issued rwts: total=4968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.921 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:38:20.921 filename1: (groupid=0, jobs=1): err= 0: pid=964349: Mon Nov 25 13:12:59 2024 00:38:20.921 read: IOPS=509, BW=2036KiB/s (2085kB/s)(19.9MiB/10022msec) 00:38:20.921 slat (nsec): min=5480, max=89836, avg=10914.53, stdev=9341.38 00:38:20.921 clat (usec): min=8972, max=50487, avg=31349.48, stdev=5140.41 00:38:20.921 lat (usec): min=8990, max=50507, avg=31360.40, stdev=5141.50 00:38:20.921 clat percentiles (usec): 00:38:20.921 | 1.00th=[14615], 5.00th=[20841], 10.00th=[23462], 20.00th=[28443], 00:38:20.921 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:38:20.921 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[38536], 00:38:20.921 | 99.00th=[44827], 99.50th=[46400], 99.90th=[49546], 99.95th=[50594], 00:38:20.921 | 99.99th=[50594] 00:38:20.921 bw ( KiB/s): min= 1872, max= 2608, per=4.29%, avg=2034.15, stdev=180.28, samples=20 00:38:20.921 iops : min= 468, max= 652, avg=508.50, stdev=45.07, samples=20 00:38:20.921 lat (msec) : 10=0.06%, 20=3.27%, 50=96.59%, 100=0.08% 00:38:20.921 cpu : usr=98.81%, sys=0.87%, ctx=17, majf=0, minf=39 00:38:20.921 IO depths : 1=2.2%, 2=4.8%, 4=14.6%, 8=68.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:38:20.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 complete : 0=0.0%, 4=91.3%, 8=3.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 issued rwts: total=5102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.921 filename1: (groupid=0, jobs=1): err= 0: pid=964350: Mon Nov 25 13:12:59 2024 00:38:20.921 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10013msec) 00:38:20.921 slat (usec): min=5, max=119, avg=33.59, stdev=19.47 00:38:20.921 clat (usec): min=13723, max=60013, avg=32827.35, stdev=2197.54 00:38:20.921 lat (usec): min=13732, max=60030, avg=32860.94, stdev=2197.71 00:38:20.921 clat percentiles (usec): 00:38:20.921 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:20.921 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:38:20.921 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:38:20.921 | 99.00th=[35390], 99.50th=[36439], 99.90th=[60031], 99.95th=[60031], 00:38:20.921 | 99.99th=[60031] 00:38:20.921 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1926.53, stdev=51.83, samples=19 00:38:20.921 iops : min= 448, max= 512, avg=481.63, stdev=12.96, samples=19 00:38:20.921 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:38:20.921 cpu : usr=98.84%, sys=0.72%, ctx=83, majf=0, minf=32 00:38:20.921 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:20.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.921 filename1: (groupid=0, jobs=1): err= 0: pid=964351: Mon Nov 25 13:12:59 2024 00:38:20.921 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10030msec) 00:38:20.921 slat (usec): min=5, max=124, avg=20.00, stdev=18.92 00:38:20.921 clat (usec): min=2996, max=36630, avg=32388.49, stdev=3797.83 00:38:20.921 lat (usec): min=3016, max=36639, avg=32408.49, stdev=3796.76 00:38:20.921 clat percentiles (usec): 00:38:20.921 | 1.00th=[10028], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:20.921 | 30.00th=[32375], 40.00th=[32637], 
50.00th=[32900], 60.00th=[33162], 00:38:20.921 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:38:20.921 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:38:20.921 | 99.99th=[36439] 00:38:20.921 bw ( KiB/s): min= 1916, max= 2560, per=4.14%, avg=1964.60, stdev=145.57, samples=20 00:38:20.921 iops : min= 479, max= 640, avg=491.15, stdev=36.39, samples=20 00:38:20.921 lat (msec) : 4=0.32%, 10=0.69%, 20=1.56%, 50=97.42% 00:38:20.921 cpu : usr=98.94%, sys=0.73%, ctx=16, majf=0, minf=35 00:38:20.921 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:20.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.921 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.921 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.921 filename1: (groupid=0, jobs=1): err= 0: pid=964353: Mon Nov 25 13:12:59 2024 00:38:20.921 read: IOPS=481, BW=1926KiB/s (1972kB/s)(18.8MiB/10002msec) 00:38:20.921 slat (nsec): min=5492, max=94792, avg=21871.76, stdev=15919.01 00:38:20.921 clat (usec): min=18192, max=65051, avg=33030.15, stdev=2417.37 00:38:20.921 lat (usec): min=18201, max=65067, avg=33052.03, stdev=2416.09 00:38:20.921 clat percentiles (usec): 00:38:20.921 | 1.00th=[30540], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:38:20.921 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.921 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:38:20.921 | 99.00th=[35914], 99.50th=[46400], 99.90th=[65274], 99.95th=[65274], 00:38:20.921 | 99.99th=[65274] 00:38:20.921 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1919.79, stdev=42.68, samples=19 00:38:20.921 iops : min= 448, max= 512, avg=479.95, stdev=10.67, samples=19 00:38:20.921 lat (msec) : 20=0.54%, 50=99.13%, 100=0.33% 00:38:20.921 cpu : usr=99.02%, sys=0.65%, ctx=18, majf=0, minf=29 00:38:20.921 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:20.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.922 filename1: (groupid=0, jobs=1): err= 0: pid=964354: Mon Nov 25 13:12:59 2024 00:38:20.922 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10013msec) 00:38:20.922 slat (usec): min=5, max=130, avg=24.65, stdev=18.92 00:38:20.922 clat (usec): min=20139, max=46327, avg=32942.95, stdev=1533.98 00:38:20.922 lat (usec): min=20183, max=46334, avg=32967.59, stdev=1531.77 00:38:20.922 clat percentiles (usec): 00:38:20.922 | 1.00th=[29754], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:20.922 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.922 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:38:20.922 | 99.00th=[36439], 99.50th=[40109], 99.90th=[46400], 99.95th=[46400], 00:38:20.922 | 99.99th=[46400] 00:38:20.922 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1926.74, stdev=51.80, samples=19 00:38:20.922 iops : min= 448, max= 512, avg=481.68, stdev=12.95, samples=19 00:38:20.922 lat (msec) : 50=100.00% 00:38:20.922 cpu : usr=99.03%, sys=0.63%, ctx=19, majf=0, minf=28 00:38:20.922 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 
32=0.0%, >=64=0.0% 00:38:20.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.922 filename1: (groupid=0, jobs=1): err= 0: pid=964355: Mon Nov 25 13:12:59 2024 00:38:20.922 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10016msec) 00:38:20.922 slat (nsec): min=5514, max=91213, avg=21912.50, stdev=16424.35 00:38:20.922 clat (usec): min=15068, max=41428, avg=32878.42, stdev=1733.11 00:38:20.922 lat (usec): min=15077, max=41435, avg=32900.33, stdev=1732.33 00:38:20.922 clat percentiles (usec): 00:38:20.922 | 1.00th=[26870], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:38:20.922 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.922 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:38:20.922 | 99.00th=[35390], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 00:38:20.922 | 99.99th=[41681] 00:38:20.922 bw ( KiB/s): min= 1920, max= 2048, per=4.08%, avg=1933.47, stdev=40.36, samples=19 00:38:20.922 iops : min= 480, max= 512, avg=483.37, stdev=10.09, samples=19 00:38:20.922 lat (msec) : 20=0.66%, 50=99.34% 00:38:20.922 cpu : usr=98.61%, sys=1.05%, ctx=19, majf=0, minf=36 00:38:20.922 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:20.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.922 filename1: (groupid=0, jobs=1): err= 0: pid=964356: Mon Nov 25 13:12:59 2024 00:38:20.922 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10008msec) 00:38:20.922 slat (nsec): min=5470, max=92798, avg=13535.76, stdev=11673.30 00:38:20.922 clat (usec): min=9810, max=57761, avg=32575.46, stdev=4577.65 00:38:20.922 lat (usec): min=9816, max=57780, avg=32589.00, stdev=4578.19 00:38:20.922 clat percentiles (usec): 00:38:20.922 | 1.00th=[18744], 5.00th=[24249], 10.00th=[27132], 20.00th=[32113], 00:38:20.922 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.922 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[38536], 00:38:20.922 | 99.00th=[47973], 99.50th=[52167], 99.90th=[57934], 99.95th=[57934], 00:38:20.922 | 99.99th=[57934] 00:38:20.922 bw ( KiB/s): min= 1763, max= 2192, per=4.12%, avg=1955.53, stdev=84.55, samples=19 00:38:20.922 iops : min= 440, max= 548, avg=488.84, stdev=21.23, samples=19 00:38:20.922 lat (msec) : 10=0.08%, 20=2.67%, 50=96.39%, 100=0.86% 00:38:20.922 cpu : usr=98.85%, sys=0.81%, ctx=14, majf=0, minf=33 00:38:20.922 IO depths : 1=0.3%, 2=0.7%, 4=3.1%, 8=79.1%, 16=16.8%, 32=0.0%, >=64=0.0% 00:38:20.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 complete : 0=0.0%, 4=89.6%, 8=9.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 issued rwts: total=4906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.922 filename2: (groupid=0, jobs=1): err= 0: pid=964357: Mon Nov 25 13:12:59 2024 00:38:20.922 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10009msec) 00:38:20.922 slat (nsec): min=5384, max=94151, avg=23550.67, stdev=16548.86 
00:38:20.922 clat (usec): min=13289, max=67439, avg=32001.00, stdev=4300.72 00:38:20.922 lat (usec): min=13312, max=67454, avg=32024.55, stdev=4304.31 00:38:20.922 clat percentiles (usec): 00:38:20.922 | 1.00th=[18482], 5.00th=[23200], 10.00th=[25560], 20.00th=[31851], 00:38:20.922 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:38:20.922 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:38:20.922 | 99.00th=[45351], 99.50th=[49021], 99.90th=[67634], 99.95th=[67634], 00:38:20.922 | 99.99th=[67634] 00:38:20.922 bw ( KiB/s): min= 1920, max= 2224, per=4.19%, avg=1985.68, stdev=99.76, samples=19 00:38:20.922 iops : min= 480, max= 556, avg=496.42, stdev=24.94, samples=19 00:38:20.922 lat (msec) : 20=2.41%, 50=97.14%, 100=0.44% 00:38:20.922 cpu : usr=99.00%, sys=0.67%, ctx=16, majf=0, minf=43 00:38:20.922 IO depths : 1=4.7%, 2=9.4%, 4=20.3%, 8=57.4%, 16=8.2%, 32=0.0%, >=64=0.0% 00:38:20.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 issued rwts: total=4972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.922 filename2: (groupid=0, jobs=1): err= 0: pid=964358: Mon Nov 25 13:12:59 2024 00:38:20.922 read: IOPS=485, BW=1941KiB/s (1988kB/s)(19.0MiB/10007msec) 00:38:20.922 slat (nsec): min=5628, max=81545, avg=19145.49, stdev=13322.99 00:38:20.922 clat (usec): min=9853, max=57045, avg=32804.34, stdev=3316.75 00:38:20.922 lat (usec): min=9859, max=57065, avg=32823.49, stdev=3317.23 00:38:20.922 clat percentiles (usec): 00:38:20.922 | 1.00th=[19530], 5.00th=[27132], 10.00th=[31851], 20.00th=[32113], 00:38:20.922 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:38:20.922 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:38:20.922 | 99.00th=[44303], 99.50th=[48497], 99.90th=[56886], 99.95th=[56886], 00:38:20.922 | 99.99th=[56886] 00:38:20.922 bw ( KiB/s): min= 1760, max= 2048, per=4.07%, avg=1930.11, stdev=56.24, samples=19 00:38:20.922 iops : min= 440, max= 512, avg=482.53, stdev=14.06, samples=19 00:38:20.922 lat (msec) : 10=0.04%, 20=1.01%, 50=98.46%, 100=0.49% 00:38:20.922 cpu : usr=98.98%, sys=0.70%, ctx=17, majf=0, minf=30 00:38:20.922 IO depths : 1=4.6%, 2=9.5%, 4=20.0%, 8=57.2%, 16=8.7%, 32=0.0%, >=64=0.0% 00:38:20.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 complete : 0=0.0%, 4=92.9%, 8=2.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 issued rwts: total=4856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.922 filename2: (groupid=0, jobs=1): err= 0: pid=964359: Mon Nov 25 13:12:59 2024 00:38:20.922 read: IOPS=483, BW=1935KiB/s (1981kB/s)(18.9MiB/10007msec) 00:38:20.922 slat (nsec): min=5476, max=76565, avg=17959.62, stdev=13116.37 00:38:20.922 clat (usec): min=13099, max=71068, avg=32937.42, stdev=3224.54 00:38:20.922 lat (usec): min=13105, max=71088, avg=32955.38, stdev=3223.70 00:38:20.922 clat percentiles (usec): 00:38:20.922 | 1.00th=[19530], 5.00th=[31589], 10.00th=[32113], 20.00th=[32375], 00:38:20.922 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.922 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:38:20.922 | 99.00th=[39584], 99.50th=[46924], 99.90th=[70779], 99.95th=[70779], 00:38:20.922 | 99.99th=[70779] 00:38:20.922 bw ( 
KiB/s): min= 1760, max= 2048, per=4.06%, avg=1923.37, stdev=53.22, samples=19 00:38:20.922 iops : min= 440, max= 512, avg=480.84, stdev=13.31, samples=19 00:38:20.922 lat (msec) : 20=1.14%, 50=98.53%, 100=0.33% 00:38:20.922 cpu : usr=98.95%, sys=0.73%, ctx=13, majf=0, minf=46 00:38:20.922 IO depths : 1=5.6%, 2=11.4%, 4=23.4%, 8=52.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:38:20.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 issued rwts: total=4840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.922 filename2: (groupid=0, jobs=1): err= 0: pid=964360: Mon Nov 25 13:12:59 2024 00:38:20.922 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10008msec) 00:38:20.922 slat (nsec): min=5517, max=65536, avg=12709.57, stdev=9119.79 00:38:20.922 clat (usec): min=9626, max=36702, avg=32700.51, stdev=2630.95 00:38:20.922 lat (usec): min=9643, max=36710, avg=32713.22, stdev=2630.36 00:38:20.922 clat percentiles (usec): 00:38:20.922 | 1.00th=[14353], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:38:20.922 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.922 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:38:20.922 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:38:20.922 | 99.99th=[36963] 00:38:20.922 bw ( KiB/s): min= 1920, max= 2304, per=4.12%, avg=1953.42, stdev=93.61, samples=19 00:38:20.922 iops : min= 480, max= 576, avg=488.32, stdev=23.36, samples=19 00:38:20.922 lat (msec) : 10=0.04%, 20=1.48%, 50=98.48% 00:38:20.922 cpu : usr=99.06%, sys=0.59%, ctx=18, majf=0, minf=36 00:38:20.922 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:20.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.922 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.922 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.922 filename2: (groupid=0, jobs=1): err= 0: pid=964362: Mon Nov 25 13:12:59 2024 00:38:20.922 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10007msec) 00:38:20.922 slat (nsec): min=5469, max=88591, avg=20123.18, stdev=13561.82 00:38:20.922 clat (usec): min=13553, max=57128, avg=32763.51, stdev=3493.19 00:38:20.922 lat (usec): min=13559, max=57148, avg=32783.64, stdev=3493.96 00:38:20.922 clat percentiles (usec): 00:38:20.922 | 1.00th=[20317], 5.00th=[27395], 10.00th=[31589], 20.00th=[32113], 00:38:20.922 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:38:20.922 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:38:20.923 | 99.00th=[46400], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:38:20.923 | 99.99th=[56886] 00:38:20.923 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1936.00, stdev=56.94, samples=19 00:38:20.923 iops : min= 448, max= 512, avg=484.00, stdev=14.24, samples=19 00:38:20.923 lat (msec) : 20=0.76%, 50=98.70%, 100=0.53% 00:38:20.923 cpu : usr=98.92%, sys=0.75%, ctx=17, majf=0, minf=23 00:38:20.923 IO depths : 1=4.5%, 2=9.1%, 4=19.2%, 8=58.4%, 16=8.8%, 32=0.0%, >=64=0.0% 00:38:20.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.923 complete : 0=0.0%, 4=92.6%, 8=2.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.923 issued rwts: total=4862,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:38:20.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.923 filename2: (groupid=0, jobs=1): err= 0: pid=964363: Mon Nov 25 13:12:59 2024 00:38:20.923 read: IOPS=492, BW=1968KiB/s (2016kB/s)(19.3MiB/10018msec) 00:38:20.923 slat (nsec): min=5470, max=83383, avg=16124.01, stdev=12202.73 00:38:20.923 clat (usec): min=11135, max=57911, avg=32375.93, stdev=4224.49 00:38:20.923 lat (usec): min=11178, max=57939, avg=32392.05, stdev=4225.52 00:38:20.923 clat percentiles (usec): 00:38:20.923 | 1.00th=[19530], 5.00th=[22938], 10.00th=[29492], 20.00th=[32113], 00:38:20.923 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:38:20.923 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:38:20.923 | 99.00th=[45876], 99.50th=[46400], 99.90th=[57410], 99.95th=[57410], 00:38:20.923 | 99.99th=[57934] 00:38:20.923 bw ( KiB/s): min= 1848, max= 2192, per=4.15%, avg=1968.60, stdev=92.34, samples=20 00:38:20.923 iops : min= 462, max= 548, avg=492.15, stdev=23.08, samples=20 00:38:20.923 lat (msec) : 20=1.60%, 50=98.24%, 100=0.16% 00:38:20.923 cpu : usr=99.04%, sys=0.62%, ctx=14, majf=0, minf=24 00:38:20.923 IO depths : 1=4.4%, 2=9.0%, 4=20.2%, 8=58.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:38:20.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.923 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.923 issued rwts: total=4930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.923 filename2: (groupid=0, jobs=1): err= 0: pid=964364: Mon Nov 25 13:12:59 2024 00:38:20.923 read: IOPS=483, BW=1935KiB/s (1981kB/s)(18.9MiB/10002msec) 00:38:20.923 slat (nsec): min=5691, max=97497, avg=30409.07, stdev=16309.73 00:38:20.923 clat (usec): min=14259, max=41168, avg=32814.66, stdev=1475.22 00:38:20.923 lat (usec): min=14268, max=41184, avg=32845.07, stdev=1475.63 00:38:20.923 clat percentiles (usec): 00:38:20.923 | 1.00th=[26870], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:38:20.923 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:38:20.923 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:20.923 | 99.00th=[35390], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:38:20.923 | 99.99th=[41157] 00:38:20.923 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1936.00, stdev=59.15, samples=19 00:38:20.923 iops : min= 448, max= 512, avg=484.00, stdev=14.79, samples=19 00:38:20.923 lat (msec) : 20=0.25%, 50=99.75% 00:38:20.923 cpu : usr=98.88%, sys=0.79%, ctx=17, majf=0, minf=34 00:38:20.923 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.8%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:20.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.923 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.923 issued rwts: total=4838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.923 filename2: (groupid=0, jobs=1): err= 0: pid=964365: Mon Nov 25 13:12:59 2024 00:38:20.923 read: IOPS=482, BW=1930KiB/s (1977kB/s)(18.9MiB/10005msec) 00:38:20.923 slat (nsec): min=5620, max=82671, avg=22513.82, stdev=14219.87 00:38:20.923 clat (usec): min=19428, max=51287, avg=32959.41, stdev=2320.37 00:38:20.923 lat (usec): min=19438, max=51303, avg=32981.92, stdev=2319.83 00:38:20.923 clat percentiles (usec): 00:38:20.923 | 1.00th=[22676], 5.00th=[31589], 10.00th=[31851], 
20.00th=[32113], 00:38:20.923 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:38:20.923 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:38:20.923 | 99.00th=[42730], 99.50th=[46400], 99.90th=[51119], 99.95th=[51119], 00:38:20.923 | 99.99th=[51119] 00:38:20.923 bw ( KiB/s): min= 1792, max= 2048, per=4.06%, avg=1925.05, stdev=64.23, samples=19 00:38:20.923 iops : min= 448, max= 512, avg=481.26, stdev=16.06, samples=19 00:38:20.923 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:38:20.923 cpu : usr=99.13%, sys=0.54%, ctx=15, majf=0, minf=27 00:38:20.923 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:38:20.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.923 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:20.923 issued rwts: total=4828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:20.923 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:20.923 00:38:20.923 Run status group 0 (all jobs): 00:38:20.923 READ: bw=46.3MiB/s (48.5MB/s), 1926KiB/s-2568KiB/s (1972kB/s-2629kB/s), io=464MiB (487MB), run=10002-10030msec 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 13:12:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 bdev_null0 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.923 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.923 [2024-11-25 13:12:59.340540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.924 bdev_null1 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.924 { 00:38:20.924 "params": { 00:38:20.924 "name": "Nvme$subsystem", 00:38:20.924 "trtype": "$TEST_TRANSPORT", 00:38:20.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.924 "adrfam": "ipv4", 00:38:20.924 "trsvcid": "$NVMF_PORT", 00:38:20.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.924 "hdgst": ${hdgst:-false}, 00:38:20.924 "ddgst": ${ddgst:-false} 00:38:20.924 }, 00:38:20.924 "method": "bdev_nvme_attach_controller" 00:38:20.924 } 00:38:20.924 EOF 00:38:20.924 )") 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:20.924 { 00:38:20.924 "params": { 00:38:20.924 "name": "Nvme$subsystem", 00:38:20.924 "trtype": "$TEST_TRANSPORT", 00:38:20.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:20.924 "adrfam": "ipv4", 00:38:20.924 "trsvcid": "$NVMF_PORT", 00:38:20.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:20.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:20.924 "hdgst": ${hdgst:-false}, 00:38:20.924 "ddgst": ${ddgst:-false} 00:38:20.924 }, 00:38:20.924 "method": "bdev_nvme_attach_controller" 00:38:20.924 } 00:38:20.924 EOF 00:38:20.924 )") 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:20.924 "params": { 00:38:20.924 "name": "Nvme0", 00:38:20.924 "trtype": "tcp", 00:38:20.924 "traddr": "10.0.0.2", 00:38:20.924 "adrfam": "ipv4", 00:38:20.924 "trsvcid": "4420", 00:38:20.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:20.924 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:20.924 "hdgst": false, 00:38:20.924 "ddgst": false 00:38:20.924 }, 00:38:20.924 "method": "bdev_nvme_attach_controller" 00:38:20.924 },{ 00:38:20.924 "params": { 00:38:20.924 "name": "Nvme1", 00:38:20.924 "trtype": "tcp", 00:38:20.924 "traddr": "10.0.0.2", 00:38:20.924 "adrfam": "ipv4", 00:38:20.924 "trsvcid": "4420", 00:38:20.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:20.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:20.924 "hdgst": false, 00:38:20.924 "ddgst": false 00:38:20.924 }, 00:38:20.924 "method": "bdev_nvme_attach_controller" 00:38:20.924 }' 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:20.924 13:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:20.924 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:20.924 ... 00:38:20.924 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:20.924 ... 
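The block above is the per-controller JSON that gen_nvmf_target_json hands to the fio spdk_bdev plugin over /dev/fd/62, one bdev_nvme_attach_controller stanza per subsystem (Nvme0/cnode0 and Nvme1/cnode1 here). A minimal standalone sketch of the same flow follows; the SPDK checkout path, the job-file path, and the enclosing subsystems/bdev wrapper are illustrative assumptions, since the trace only shows the inner stanzas:

#!/usr/bin/env bash
# Sketch: drive fio through the SPDK bdev plugin against an NVMe-oF/TCP target.
SPDK_DIR=/path/to/spdk                    # illustrative checkout location
cat > /tmp/spdk_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# The plugin is preloaded, exactly as the LD_PRELOAD= line in this trace shows.
LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/spdk_bdev.json /tmp/job.fio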
00:38:20.924 fio-3.35
00:38:20.924 Starting 4 threads
00:38:26.220 
00:38:26.220 filename0: (groupid=0, jobs=1): err= 0: pid=966548: Mon Nov 25 13:13:05 2024
00:38:26.220 read: IOPS=2160, BW=16.9MiB/s (17.7MB/s)(84.4MiB/5002msec)
00:38:26.220 slat (nsec): min=5464, max=75062, avg=8663.23, stdev=3926.84
00:38:26.220 clat (usec): min=1272, max=6191, avg=3678.39, stdev=545.28
00:38:26.220 lat (usec): min=1289, max=6196, avg=3687.05, stdev=545.01
00:38:26.220 clat percentiles (usec):
00:38:26.220 | 1.00th=[ 2540], 5.00th=[ 2868], 10.00th=[ 3097], 20.00th=[ 3392],
00:38:26.220 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3752],
00:38:26.220 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4047], 95.00th=[ 5014],
00:38:26.220 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6128],
00:38:26.220 | 99.99th=[ 6194]
00:38:26.220 bw ( KiB/s): min=16624, max=19600, per=25.74%, avg=17292.44, stdev=924.99, samples=9
00:38:26.220 iops : min= 2078, max= 2450, avg=2161.56, stdev=115.62, samples=9
00:38:26.220 lat (msec) : 2=0.36%, 4=88.91%, 10=10.73%
00:38:26.220 cpu : usr=96.84%, sys=2.88%, ctx=8, majf=0, minf=9
00:38:26.220 IO depths : 1=0.1%, 2=1.2%, 4=70.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:26.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:26.220 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:26.220 issued rwts: total=10809,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:26.220 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:26.220 filename0: (groupid=0, jobs=1): err= 0: pid=966549: Mon Nov 25 13:13:05 2024
00:38:26.220 read: IOPS=2153, BW=16.8MiB/s (17.6MB/s)(84.1MiB/5002msec)
00:38:26.220 slat (nsec): min=5469, max=63507, avg=9135.10, stdev=3096.68
00:38:26.220 clat (usec): min=1044, max=6038, avg=3690.89, stdev=412.26
00:38:26.220 lat (usec): min=1053, max=6046, avg=3700.02, stdev=412.06
00:38:26.220 clat percentiles (usec):
00:38:26.220 | 1.00th=[ 2671], 5.00th=[ 3064], 10.00th=[ 3294], 20.00th=[ 3523],
00:38:26.220 | 30.00th=[ 3556], 40.00th=[ 3589], 50.00th=[ 3752], 60.00th=[ 3752],
00:38:26.220 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4015], 95.00th=[ 4359],
00:38:26.220 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 5866], 99.95th=[ 5997],
00:38:26.220 | 99.99th=[ 6063]
00:38:26.220 bw ( KiB/s): min=16912, max=17680, per=25.70%, avg=17269.33, stdev=264.00, samples=9
00:38:26.220 iops : min= 2114, max= 2210, avg=2158.67, stdev=33.00, samples=9
00:38:26.220 lat (msec) : 2=0.16%, 4=89.66%, 10=10.18%
00:38:26.220 cpu : usr=96.84%, sys=2.88%, ctx=10, majf=0, minf=9
00:38:26.220 IO depths : 1=0.1%, 2=0.5%, 4=70.0%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:26.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:26.220 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:26.220 issued rwts: total=10771,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:26.220 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:26.221 filename1: (groupid=0, jobs=1): err= 0: pid=966550: Mon Nov 25 13:13:05 2024
00:38:26.221 read: IOPS=2063, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5002msec)
00:38:26.221 slat (nsec): min=5454, max=61030, avg=8508.19, stdev=3770.23
00:38:26.221 clat (usec): min=2414, max=7891, avg=3857.01, stdev=544.83
00:38:26.221 lat (usec): min=2419, max=7919, avg=3865.52, stdev=544.79
00:38:26.221 clat percentiles (usec):
00:38:26.221 | 1.00th=[ 3064], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3556],
00:38:26.221 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785],
00:38:26.221 | 70.00th=[ 3818], 80.00th=[ 3884], 90.00th=[ 4359], 95.00th=[ 5342],
00:38:26.221 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6259], 99.95th=[ 6390],
00:38:26.221 | 99.99th=[ 7832]
00:38:26.221 bw ( KiB/s): min=15374, max=17072, per=24.57%, avg=16508.22, stdev=575.98, samples=9
00:38:26.221 iops : min= 1921, max= 2134, avg=2063.44, stdev=72.18, samples=9
00:38:26.221 lat (msec) : 4=82.93%, 10=17.07%
00:38:26.221 cpu : usr=97.12%, sys=2.64%, ctx=7, majf=0, minf=9
00:38:26.221 IO depths : 1=0.1%, 2=0.1%, 4=68.1%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:26.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:26.221 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:26.221 issued rwts: total=10320,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:26.221 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:26.221 filename1: (groupid=0, jobs=1): err= 0: pid=966551: Mon Nov 25 13:13:05 2024
00:38:26.221 read: IOPS=2021, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5002msec)
00:38:26.221 slat (nsec): min=5459, max=37889, avg=8022.65, stdev=2667.21
00:38:26.221 clat (usec): min=1952, max=48844, avg=3936.97, stdev=1413.13
00:38:26.221 lat (usec): min=1960, max=48870, avg=3944.99, stdev=1413.22
00:38:26.221 clat percentiles (usec):
00:38:26.221 | 1.00th=[ 2933], 5.00th=[ 3228], 10.00th=[ 3425], 20.00th=[ 3556],
00:38:26.221 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785],
00:38:26.221 | 70.00th=[ 3818], 80.00th=[ 4015], 90.00th=[ 5211], 95.00th=[ 5538],
00:38:26.221 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6390], 99.95th=[49021],
00:38:26.221 | 99.99th=[49021]
00:38:26.221 bw ( KiB/s): min=14621, max=16992, per=24.07%, avg=16170.90, stdev=658.80, samples=10
00:38:26.221 iops : min= 1827, max= 2124, avg=2021.30, stdev=82.51, samples=10
00:38:26.221 lat (msec) : 2=0.06%, 4=79.41%, 10=20.45%, 50=0.08%
00:38:26.221 cpu : usr=96.68%, sys=3.08%, ctx=8, majf=0, minf=9
00:38:26.221 IO depths : 1=0.1%, 2=0.1%, 4=71.8%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:26.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:26.221 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:26.221 issued rwts: total=10110,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:26.221 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:26.221 
00:38:26.221 Run status group 0 (all jobs):
00:38:26.221 READ: bw=65.6MiB/s (68.8MB/s), 15.8MiB/s-16.9MiB/s (16.6MB/s-17.7MB/s), io=328MiB (344MB), run=5002-5002msec
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:26.221 13:13:05
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.221 00:38:26.221 real 0m24.678s 00:38:26.221 user 5m19.359s 00:38:26.221 sys 0m4.388s 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 ************************************ 00:38:26.221 END TEST fio_dif_rand_params 00:38:26.221 ************************************ 00:38:26.221 13:13:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:26.221 13:13:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:26.221 13:13:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 ************************************ 00:38:26.221 START TEST fio_dif_digest 00:38:26.221 ************************************ 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 bdev_null0 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:26.221 [2024-11-25 13:13:05.947782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:26.221 { 00:38:26.221 "params": { 00:38:26.221 "name": "Nvme$subsystem", 00:38:26.221 "trtype": "$TEST_TRANSPORT", 00:38:26.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:26.221 "adrfam": "ipv4", 00:38:26.221 "trsvcid": "$NVMF_PORT", 00:38:26.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:26.221 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:38:26.221 "hdgst": ${hdgst:-false}, 00:38:26.221 "ddgst": ${ddgst:-false} 00:38:26.221 }, 00:38:26.221 "method": "bdev_nvme_attach_controller" 00:38:26.221 } 00:38:26.221 EOF 00:38:26.221 )") 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:26.221 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:26.222 "params": { 00:38:26.222 "name": "Nvme0", 00:38:26.222 "trtype": "tcp", 00:38:26.222 "traddr": "10.0.0.2", 00:38:26.222 "adrfam": "ipv4", 00:38:26.222 "trsvcid": "4420", 00:38:26.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.222 "hdgst": true, 00:38:26.222 "ddgst": true 00:38:26.222 }, 00:38:26.222 "method": "bdev_nvme_attach_controller" 00:38:26.222 }' 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:26.222 13:13:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:26.222 13:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:26.222 13:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:26.222 13:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:26.222 13:13:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:26.792 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:26.792 ... 
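The resolved config just printed enables "hdgst": true and "ddgst": true, i.e. the NVMe/TCP header and data digests (CRC32C over each PDU header and payload) on the initiator side. The same attach can also be expressed as a raw JSON-RPC request to the application's Unix socket; a sketch, where the request id and the nc -U delivery are illustrative choices rather than what this harness does:

# Sketch: bdev_nvme_attach_controller with both digests enabled.
cat > /tmp/attach_req.json <<'JSON'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  }
}
JSON
# Any JSON-RPC client works; netcat against the default socket is one option.
nc -U /var/tmp/spdk.sock < /tmp/attach_req.json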
00:38:26.792 fio-3.35
00:38:26.792 Starting 3 threads
00:38:39.032 
00:38:39.032 filename0: (groupid=0, jobs=1): err= 0: pid=968036: Mon Nov 25 13:13:16 2024
00:38:39.032 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(287MiB/10046msec)
00:38:39.032 slat (nsec): min=5869, max=33865, avg=7322.97, stdev=1433.09
00:38:39.032 clat (usec): min=8091, max=54715, avg=13115.64, stdev=2183.96
00:38:39.032 lat (usec): min=8097, max=54749, avg=13122.96, stdev=2184.19
00:38:39.032 clat percentiles (usec):
00:38:39.032 | 1.00th=[ 8979], 5.00th=[11207], 10.00th=[11600], 20.00th=[12256],
00:38:39.032 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304],
00:38:39.032 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746],
00:38:39.032 | 99.00th=[15664], 99.50th=[15795], 99.90th=[53740], 99.95th=[54264],
00:38:39.032 | 99.99th=[54789]
00:38:39.032 bw ( KiB/s): min=26315, max=30976, per=34.76%, avg=29322.15, stdev=1092.09, samples=20
00:38:39.032 iops : min= 205, max= 242, avg=229.05, stdev= 8.62, samples=20
00:38:39.032 lat (msec) : 10=2.05%, 20=97.73%, 50=0.04%, 100=0.17%
00:38:39.032 cpu : usr=94.26%, sys=5.50%, ctx=18, majf=0, minf=54
00:38:39.032 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:39.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:39.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:39.032 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:39.032 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:39.032 filename0: (groupid=0, jobs=1): err= 0: pid=968038: Mon Nov 25 13:13:16 2024
00:38:39.032 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10046msec)
00:38:39.032 slat (nsec): min=5888, max=90537, avg=7324.90, stdev=2341.08
00:38:39.032 clat (usec): min=8308, max=57442, avg=14073.14, stdev=3131.41
00:38:39.032 lat (usec): min=8314, max=57449, avg=14080.46, stdev=3131.38
00:38:39.032 clat percentiles (usec):
00:38:39.032 | 1.00th=[10290], 5.00th=[11994], 10.00th=[12518], 20.00th=[13042],
00:38:39.032 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091],
00:38:39.032 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795],
00:38:39.032 | 99.00th=[16712], 99.50th=[45876], 99.90th=[56361], 99.95th=[56886],
00:38:39.032 | 99.99th=[57410]
00:38:39.032 bw ( KiB/s): min=24832, max=29440, per=32.40%, avg=27328.00, stdev=1104.62, samples=20
00:38:39.032 iops : min= 194, max= 230, avg=213.50, stdev= 8.63, samples=20
00:38:39.032 lat (msec) : 10=0.80%, 20=98.69%, 50=0.09%, 100=0.42%
00:38:39.032 cpu : usr=94.29%, sys=5.47%, ctx=17, majf=0, minf=100
00:38:39.032 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:39.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:39.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:39.032 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:39.032 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:39.032 filename0: (groupid=0, jobs=1): err= 0: pid=968039: Mon Nov 25 13:13:16 2024
00:38:39.032 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(274MiB/10045msec)
00:38:39.032 slat (nsec): min=5924, max=32807, avg=7326.23, stdev=1537.54
00:38:39.032 clat (usec): min=8996, max=54019, avg=13730.80, stdev=2587.75
00:38:39.032 lat (usec): min=9012, max=54025, avg=13738.13, stdev=2587.69
00:38:39.032 clat percentiles (usec):
00:38:39.032 | 1.00th=[10028], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780],
00:38:39.032 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829],
00:38:39.032 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15008], 95.00th=[15270],
00:38:39.032 | 99.00th=[16188], 99.50th=[17433], 99.90th=[53740], 99.95th=[53740],
00:38:39.032 | 99.99th=[54264]
00:38:39.032 bw ( KiB/s): min=25856, max=29184, per=33.20%, avg=28006.40, stdev=749.36, samples=20
00:38:39.032 iops : min= 202, max= 228, avg=218.80, stdev= 5.85, samples=20
00:38:39.032 lat (msec) : 10=0.96%, 20=98.68%, 50=0.05%, 100=0.32%
00:38:39.032 cpu : usr=94.14%, sys=5.61%, ctx=23, majf=0, minf=149
00:38:39.032 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:39.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:39.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:39.032 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:39.032 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:39.032 
00:38:39.032 Run status group 0 (all jobs):
00:38:39.032 READ: bw=82.4MiB/s (86.4MB/s), 26.6MiB/s-28.5MiB/s (27.9MB/s-29.9MB/s), io=828MiB (868MB), run=10045-10046msec
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:39.033 
00:38:39.033 real 0m11.259s
00:38:39.033 user 0m44.303s
00:38:39.033 sys 0m2.078s
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:39.033 13:13:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:39.033 ************************************
00:38:39.033 END TEST fio_dif_digest
00:38:39.033 ************************************
00:38:39.033 13:13:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:38:39.033 13:13:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:39.033 rmmod nvme_tcp
00:38:39.033 rmmod nvme_fabrics
00:38:39.033 rmmod nvme_keyring
13:13:17 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 957593 ']' 00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 957593 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 957593 ']' 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 957593 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957593 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957593' 00:38:39.033 killing process with pid 957593 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@973 -- # kill 957593 00:38:39.033 13:13:17 nvmf_dif -- common/autotest_common.sh@978 -- # wait 957593 00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:39.033 13:13:17 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:41.583 Waiting for block devices as requested 00:38:41.583 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:41.583 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:41.844 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:41.844 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:41.844 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:42.106 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:42.106 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:42.106 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:42.106 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:42.367 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:42.367 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:42.628 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:42.628 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:42.628 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:42.628 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:42.888 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:42.888 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.148 13:13:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.148 13:13:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:43.148 13:13:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.698 13:13:25 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
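The iptr helper traced above can drop only its own firewall rules because every rule the harness installs is tagged with -m comment --comment SPDK_NVMF (visible in the abort_qd_sizes network setup further down). Teardown then reduces to a filter over a full dump; its core, as the trace shows:

# Dump the current ruleset, strip every SPDK-tagged rule, reload the remainder.
iptables-save | grep -v SPDK_NVMF | iptables-restore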
00:38:45.698 00:38:45.698 real 1m20.406s 00:38:45.698 user 8m8.074s 00:38:45.698 sys 0m22.873s 00:38:45.698 13:13:25 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.698 13:13:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:45.698 ************************************ 00:38:45.698 END TEST nvmf_dif 00:38:45.698 ************************************ 00:38:45.698 13:13:25 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:45.698 13:13:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:45.698 13:13:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.698 13:13:25 -- common/autotest_common.sh@10 -- # set +x 00:38:45.698 ************************************ 00:38:45.698 START TEST nvmf_abort_qd_sizes 00:38:45.698 ************************************ 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:45.698 * Looking for test storage... 00:38:45.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:45.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.698 --rc genhtml_branch_coverage=1 00:38:45.698 --rc genhtml_function_coverage=1 00:38:45.698 --rc genhtml_legend=1 00:38:45.698 --rc geninfo_all_blocks=1 00:38:45.698 --rc geninfo_unexecuted_blocks=1 00:38:45.698 00:38:45.698 ' 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:45.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.698 --rc genhtml_branch_coverage=1 00:38:45.698 --rc genhtml_function_coverage=1 00:38:45.698 --rc genhtml_legend=1 00:38:45.698 --rc geninfo_all_blocks=1 00:38:45.698 --rc geninfo_unexecuted_blocks=1 00:38:45.698 00:38:45.698 ' 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:45.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.698 --rc genhtml_branch_coverage=1 00:38:45.698 --rc genhtml_function_coverage=1 00:38:45.698 --rc genhtml_legend=1 00:38:45.698 --rc geninfo_all_blocks=1 00:38:45.698 --rc geninfo_unexecuted_blocks=1 00:38:45.698 00:38:45.698 ' 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:45.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.698 --rc genhtml_branch_coverage=1 00:38:45.698 --rc genhtml_function_coverage=1 00:38:45.698 --rc genhtml_legend=1 00:38:45.698 --rc geninfo_all_blocks=1 00:38:45.698 --rc geninfo_unexecuted_blocks=1 00:38:45.698 00:38:45.698 ' 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.698 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:45.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:45.699 13:13:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:53.841 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:53.841 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:53.841 Found net devices under 0000:31:00.0: cvl_0_0 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:53.841 Found net devices under 0000:31:00.1: cvl_0_1 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:53.841 13:13:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:53.841 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:53.842 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:54.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:54.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:38:54.104 00:38:54.104 --- 10.0.0.2 ping statistics --- 00:38:54.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.104 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:54.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:54.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:38:54.104 00:38:54.104 --- 10.0.0.1 ping statistics --- 00:38:54.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:54.104 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:54.104 13:13:33 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:58.334 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:58.334 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=978429 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 978429 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 978429 ']' 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:58.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:58.594 13:13:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:58.594 [2024-11-25 13:13:38.360596] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:38:58.594 [2024-11-25 13:13:38.360644] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:58.594 [2024-11-25 13:13:38.446831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:58.594 [2024-11-25 13:13:38.483902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:58.594 [2024-11-25 13:13:38.483933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:58.594 [2024-11-25 13:13:38.483941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:58.594 [2024-11-25 13:13:38.483948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:58.594 [2024-11-25 13:13:38.483953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:58.594 [2024-11-25 13:13:38.485481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:58.594 [2024-11-25 13:13:38.485592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:58.594 [2024-11-25 13:13:38.485746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.594 [2024-11-25 13:13:38.485747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:59.533 
13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:59.533 13:13:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:59.533 ************************************ 00:38:59.533 START TEST spdk_target_abort 00:38:59.533 ************************************ 00:38:59.533 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:59.533 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:59.533 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:59.533 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.533 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:59.794 spdk_targetn1 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:59.794 [2024-11-25 13:13:39.567995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:59.794 [2024-11-25 13:13:39.616323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:59.794 13:13:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:00.054 [2024-11-25 13:13:39.754415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:304 len:8 PRP1 0x200004abe000 PRP2 0x0 00:39:00.054 [2024-11-25 13:13:39.754445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0029 p:1 m:0 dnr:0 00:39:00.054 [2024-11-25 13:13:39.795530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1888 len:8 PRP1 0x200004abe000 PRP2 0x0 00:39:00.055 [2024-11-25 13:13:39.795549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00ee p:1 m:0 dnr:0 00:39:00.055 [2024-11-25 13:13:39.818365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2640 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:39:00.055 [2024-11-25 13:13:39.818383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:39:00.055 [2024-11-25 13:13:39.826346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2912 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:00.055 [2024-11-25 13:13:39.826362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:00.055 [2024-11-25 13:13:39.844823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3664 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:00.055 [2024-11-25 13:13:39.844839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00cc p:0 m:0 dnr:0 00:39:00.055 [2024-11-25 13:13:39.865327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:4280 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:39:00.055 [2024-11-25 13:13:39.865343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:0 00:39:03.353 Initializing NVMe Controllers 00:39:03.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:03.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:03.353 Initialization complete. Launching workers. 
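Before the abort traffic above, spdk_target_abort provisioned the target entirely over RPC (rpc_cmd is the test wrapper around scripts/rpc.py against /var/tmp/spdk.sock), then rabort loops the abort example over each queue depth. Condensed from the rpc_cmd and abort invocations in this trace:

  rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # -> spdk_targetn1
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # rabort: one abort run per queue depth, qds=(4 24 64) per the trace
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

The qd=4 statistics follow immediately below; the qd=24 and qd=64 passes repeat the same pattern.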
00:39:03.353 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13296, failed: 6 00:39:03.353 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2818, failed to submit 10484 00:39:03.353 success 703, unsuccessful 2115, failed 0 00:39:03.353 13:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:03.353 13:13:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:03.353 [2024-11-25 13:13:43.043140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e50000 PRP2 0x0 00:39:03.353 [2024-11-25 13:13:43.043186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:39:03.353 [2024-11-25 13:13:43.067117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:680 len:8 PRP1 0x200004e56000 PRP2 0x0 00:39:03.353 [2024-11-25 13:13:43.067143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:39:03.353 [2024-11-25 13:13:43.121831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2024 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:39:03.353 [2024-11-25 13:13:43.121859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:39:03.353 [2024-11-25 13:13:43.146070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2656 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:39:03.353 [2024-11-25 13:13:43.146095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:39:03.353 [2024-11-25 13:13:43.177999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:3360 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:39:03.353 [2024-11-25 13:13:43.178022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:00b2 p:0 m:0 dnr:0 00:39:03.353 [2024-11-25 13:13:43.193974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:3744 len:8 PRP1 0x200004e52000 PRP2 0x0 00:39:03.353 [2024-11-25 13:13:43.194005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00d7 p:0 m:0 dnr:0 00:39:03.353 [2024-11-25 13:13:43.210027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:4072 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:39:03.353 [2024-11-25 13:13:43.210050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:39:05.892 [2024-11-25 13:13:45.712155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:63616 len:8 PRP1 0x200004e48000 PRP2 0x0 00:39:05.892 [2024-11-25 13:13:45.712188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:39:06.463 Initializing NVMe Controllers 00:39:06.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:testnqn 00:39:06.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:06.463 Initialization complete. Launching workers. 00:39:06.463 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8834, failed: 8 00:39:06.463 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7588 00:39:06.463 success 349, unsuccessful 905, failed 0 00:39:06.463 13:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:06.463 13:13:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:09.011 [2024-11-25 13:13:48.441118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:146 nsid:1 lba:218360 len:8 PRP1 0x200004b0e000 PRP2 0x0 00:39:09.012 [2024-11-25 13:13:48.441173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:146 cdw0:0 sqhd:0097 p:1 m:0 dnr:0 00:39:09.582 Initializing NVMe Controllers 00:39:09.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:09.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:09.582 Initialization complete. Launching workers. 00:39:09.582 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41389, failed: 1 00:39:09.582 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2671, failed to submit 38719 00:39:09.582 success 603, unsuccessful 2068, failed 0 00:39:09.583 13:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:09.843 13:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.843 13:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:09.843 13:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.843 13:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:09.843 13:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.843 13:13:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 978429 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 978429 ']' 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 978429 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 978429 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 978429' 00:39:11.757 killing process with pid 978429 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 978429 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 978429 00:39:11.757 00:39:11.757 real 0m12.265s 00:39:11.757 user 0m49.999s 00:39:11.757 sys 0m1.920s 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:11.757 ************************************ 00:39:11.757 END TEST spdk_target_abort 00:39:11.757 ************************************ 00:39:11.757 13:13:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:11.757 13:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:11.757 13:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:11.757 13:13:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:11.757 ************************************ 00:39:11.757 START TEST kernel_target_abort 00:39:11.757 ************************************ 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:11.757 13:13:51 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:11.757 13:13:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:15.962 Waiting for block devices as requested 00:39:15.962 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:15.962 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:15.963 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:15.963 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:15.963 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:15.963 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:15.963 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:16.223 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:16.223 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:16.484 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:16.484 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:16.484 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:16.744 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:16.744 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:16.744 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:16.744 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:17.006 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:17.266 13:13:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:17.266 No valid GPT data, bailing 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # 
nvme=/dev/nvme0n1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:17.266 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:39:17.266 00:39:17.266 Discovery Log Number of Records 2, Generation counter 2 00:39:17.266 =====Discovery Log Entry 0====== 00:39:17.266 trtype: tcp 00:39:17.266 adrfam: ipv4 00:39:17.266 subtype: current discovery subsystem 00:39:17.266 treq: not specified, sq flow control disable supported 00:39:17.267 portid: 1 00:39:17.267 trsvcid: 4420 00:39:17.267 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:17.267 traddr: 10.0.0.1 00:39:17.267 eflags: none 00:39:17.267 sectype: none 00:39:17.267 =====Discovery Log Entry 1====== 00:39:17.267 trtype: tcp 00:39:17.267 adrfam: ipv4 00:39:17.267 subtype: nvme subsystem 00:39:17.267 treq: not specified, sq flow control disable supported 00:39:17.267 portid: 1 00:39:17.267 trsvcid: 4420 00:39:17.267 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:17.267 traddr: 10.0.0.1 00:39:17.267 eflags: none 00:39:17.267 sectype: none 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:17.267 13:13:57 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:17.267 13:13:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:20.577 Initializing NVMe Controllers 00:39:20.577 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:20.577 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:20.577 Initialization complete. Launching workers. 00:39:20.577 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66752, failed: 0 00:39:20.577 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66752, failed to submit 0 00:39:20.577 success 0, unsuccessful 66752, failed 0 00:39:20.577 13:14:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:20.577 13:14:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:23.878 Initializing NVMe Controllers 00:39:23.878 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:23.878 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:23.878 Initialization complete. Launching workers. 
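kernel_target_abort drives the same rabort loop, but this time the listener is the Linux kernel nvmet target configured through configfs (nvmf/common.sh@686..@708 above). The modprobe, mkdir, ln -s, and nvme discover commands are verbatim from the trace; xtrace does not show shell redirections, so the attribute file names below are the standard nvmet configfs names and should be read as assumptions:

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"      # assumed redirect target
  echo 1            > "$sub/attr_allow_any_host"                 # assumed redirect target
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"            # assumed redirect target
  echo 1            > "$sub/namespaces/1/enable"                 # assumed redirect target
  echo 10.0.0.1     > "$port/addr_traddr"                        # assumed redirect target
  echo tcp          > "$port/addr_trtype"                        # assumed redirect target
  echo 4420         > "$port/addr_trsvcid"                       # assumed redirect target
  echo ipv4         > "$port/addr_adrfam"                        # assumed redirect target
  ln -s "$sub" "$port/subsystems/"
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420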
00:39:23.878 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107478, failed: 0 00:39:23.878 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27066, failed to submit 80412 00:39:23.878 success 0, unsuccessful 27066, failed 0 00:39:23.878 13:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:23.878 13:14:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:27.178 Initializing NVMe Controllers 00:39:27.178 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:27.178 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:27.179 Initialization complete. Launching workers. 00:39:27.179 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101844, failed: 0 00:39:27.179 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25458, failed to submit 76386 00:39:27.179 success 0, unsuccessful 25458, failed 0 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:27.179 13:14:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:30.615 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:30.615 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:30.880 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:30.880 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:30.880 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:30.880 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:30.880 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:39:30.880 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:32.793 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:33.054 00:39:33.054 real 0m21.127s 00:39:33.054 user 0m10.272s 00:39:33.054 sys 0m6.633s 00:39:33.054 13:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:33.054 13:14:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:33.054 ************************************ 00:39:33.054 END TEST kernel_target_abort 00:39:33.054 ************************************ 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:33.054 rmmod nvme_tcp 00:39:33.054 rmmod nvme_fabrics 00:39:33.054 rmmod nvme_keyring 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 978429 ']' 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 978429 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 978429 ']' 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 978429 00:39:33.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (978429) - No such process 00:39:33.054 13:14:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 978429 is not found' 00:39:33.055 Process with pid 978429 is not found 00:39:33.055 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:33.055 13:14:12 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:37.255 Waiting for block devices as requested 00:39:37.255 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:37.255 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:37.255 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:37.255 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:37.255 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:37.255 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:37.255 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:37.255 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:37.255 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:37.516 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:37.516 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:37.516 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:37.516 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:37.775 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:37.775 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:37.775 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:37.775 0000:00:01.1 
(8086 0b00): vfio-pci -> ioatdma 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:38.346 13:14:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:40.258 13:14:20 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:40.258 00:39:40.258 real 0m54.919s 00:39:40.258 user 1m6.190s 00:39:40.258 sys 0m20.883s 00:39:40.258 13:14:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:40.258 13:14:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:40.258 ************************************ 00:39:40.258 END TEST nvmf_abort_qd_sizes 00:39:40.258 ************************************ 00:39:40.258 13:14:20 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:40.259 13:14:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:40.259 13:14:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:40.259 13:14:20 -- common/autotest_common.sh@10 -- # set +x 00:39:40.259 ************************************ 00:39:40.259 START TEST keyring_file 00:39:40.259 ************************************ 00:39:40.259 13:14:20 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:40.520 * Looking for test storage... 
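Teardown between suites is equally mechanical: nvmftestfini's iptr strips only the SPDK-tagged firewall rule, the helper namespace is removed, and the initiator-side address is flushed before keyring_file begins. A sketch condensed from the trace above (the body of _remove_spdk_ns is not shown, so the ip netns delete line is an assumption):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address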
00:39:40.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:40.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.520 --rc genhtml_branch_coverage=1 00:39:40.520 --rc genhtml_function_coverage=1 00:39:40.520 --rc genhtml_legend=1 00:39:40.520 --rc geninfo_all_blocks=1 00:39:40.520 --rc geninfo_unexecuted_blocks=1 00:39:40.520 00:39:40.520 ' 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:40.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.520 --rc genhtml_branch_coverage=1 00:39:40.520 --rc genhtml_function_coverage=1 00:39:40.520 --rc genhtml_legend=1 00:39:40.520 --rc geninfo_all_blocks=1 
00:39:40.520 --rc geninfo_unexecuted_blocks=1 00:39:40.520 00:39:40.520 ' 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:40.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.520 --rc genhtml_branch_coverage=1 00:39:40.520 --rc genhtml_function_coverage=1 00:39:40.520 --rc genhtml_legend=1 00:39:40.520 --rc geninfo_all_blocks=1 00:39:40.520 --rc geninfo_unexecuted_blocks=1 00:39:40.520 00:39:40.520 ' 00:39:40.520 13:14:20 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:40.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.520 --rc genhtml_branch_coverage=1 00:39:40.520 --rc genhtml_function_coverage=1 00:39:40.520 --rc genhtml_legend=1 00:39:40.520 --rc geninfo_all_blocks=1 00:39:40.520 --rc geninfo_unexecuted_blocks=1 00:39:40.520 00:39:40.520 ' 00:39:40.520 13:14:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:40.520 13:14:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:40.520 13:14:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.520 13:14:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.520 13:14:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.520 13:14:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:40.520 13:14:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:40.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:40.520 13:14:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:40.520 13:14:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:40.520 13:14:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:40.520 13:14:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:40.520 13:14:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:40.520 13:14:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
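prep_key (keyring/common.sh, traced below) writes an NVMe/TLS pre-shared key in interchange format to a mktemp file and chmods it 0600. The interchange string has the shape "NVMeTLSkey-1:<hh>:<base64 of key bytes plus CRC32>:"; the python that format_key pipes into is not shown in the trace, so the one-liner below is a hedged reconstruction, including the assumptions that digest 0 maps to the "00" field and that the CRC32 is appended little-endian:

  path=$(mktemp)   # e.g. /tmp/tmp.YdxOJLzi7d in this run
  python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:" + base64.b64encode(k + c).decode() + ":")' \
      00112233445566778899aabbccddeeff > "$path"
  chmod 0600 "$path"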
00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YdxOJLzi7d 00:39:40.520 13:14:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:40.520 13:14:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YdxOJLzi7d 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YdxOJLzi7d 00:39:40.782 13:14:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.YdxOJLzi7d 00:39:40.782 13:14:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FVkmayfdSb 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:40.782 13:14:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:40.782 13:14:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:40.782 13:14:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:40.782 13:14:20 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:40.782 13:14:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:40.782 13:14:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FVkmayfdSb 00:39:40.782 13:14:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FVkmayfdSb 00:39:40.782 13:14:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FVkmayfdSb 00:39:40.782 13:14:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=989334 00:39:40.782 13:14:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 989334 00:39:40.782 13:14:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:40.782 13:14:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 989334 ']' 00:39:40.782 13:14:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.782 13:14:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:40.782 13:14:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.782 13:14:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:40.782 13:14:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:40.782 [2024-11-25 13:14:20.540607] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:39:40.782 [2024-11-25 13:14:20.540666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989334 ] 00:39:40.782 [2024-11-25 13:14:20.619622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.782 [2024-11-25 13:14:20.655880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:41.042 13:14:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:41.042 [2024-11-25 13:14:20.846222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:41.042 null0 00:39:41.042 [2024-11-25 13:14:20.878270] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:41.042 [2024-11-25 13:14:20.878631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.042 13:14:20 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:41.042 [2024-11-25 13:14:20.910349] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:41.042 request: 00:39:41.042 { 00:39:41.042 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:41.042 "secure_channel": false, 00:39:41.042 "listen_address": { 00:39:41.042 "trtype": "tcp", 00:39:41.042 "traddr": "127.0.0.1", 00:39:41.042 "trsvcid": "4420" 00:39:41.042 }, 00:39:41.042 "method": "nvmf_subsystem_add_listener", 00:39:41.042 "req_id": 1 00:39:41.042 } 00:39:41.042 Got JSON-RPC error response 00:39:41.042 response: 00:39:41.042 { 00:39:41.042 "code": 
-32602, 00:39:41.042 "message": "Invalid parameters" 00:39:41.042 } 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:41.042 13:14:20 keyring_file -- keyring/file.sh@47 -- # bperfpid=989338 00:39:41.042 13:14:20 keyring_file -- keyring/file.sh@49 -- # waitforlisten 989338 /var/tmp/bperf.sock 00:39:41.042 13:14:20 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 989338 ']' 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:41.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:41.042 13:14:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:41.303 [2024-11-25 13:14:20.969011] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:39:41.303 [2024-11-25 13:14:20.969057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989338 ] 00:39:41.303 [2024-11-25 13:14:21.061506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.303 [2024-11-25 13:14:21.097496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:41.873 13:14:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:41.873 13:14:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:41.873 13:14:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YdxOJLzi7d 00:39:41.873 13:14:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YdxOJLzi7d 00:39:42.134 13:14:21 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FVkmayfdSb 00:39:42.134 13:14:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FVkmayfdSb 00:39:42.394 13:14:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:42.395 13:14:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:42.395 13:14:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.395 13:14:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.395 13:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:42.395 
13:14:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.YdxOJLzi7d == \/\t\m\p\/\t\m\p\.\Y\d\x\O\J\L\z\i\7\d ]] 00:39:42.395 13:14:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:42.395 13:14:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:42.395 13:14:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.395 13:14:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.395 13:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:42.656 13:14:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.FVkmayfdSb == \/\t\m\p\/\t\m\p\.\F\V\k\m\a\y\f\d\S\b ]] 00:39:42.656 13:14:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:42.656 13:14:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:42.656 13:14:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:42.656 13:14:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.656 13:14:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.656 13:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:42.917 13:14:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:42.917 13:14:22 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:42.917 13:14:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:42.917 13:14:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:42.917 13:14:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:42.917 13:14:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:42.917 13:14:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:42.917 13:14:22 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:42.917 13:14:22 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:42.917 13:14:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:43.177 [2024-11-25 13:14:22.935916] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:43.177 nvme0n1 00:39:43.177 13:14:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:43.177 13:14:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:43.177 13:14:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:43.177 13:14:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:43.177 13:14:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:43.177 13:14:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:43.438 13:14:23 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:43.438 13:14:23 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:43.438 13:14:23 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:39:43.438 13:14:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:43.438 13:14:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:43.438 13:14:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:43.438 13:14:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:43.698 13:14:23 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:43.698 13:14:23 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:43.698 Running I/O for 1 seconds... 00:39:44.641 16402.00 IOPS, 64.07 MiB/s 00:39:44.641 Latency(us) 00:39:44.641 [2024-11-25T12:14:24.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:44.641 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:44.641 nvme0n1 : 1.01 16411.30 64.11 0.00 0.00 7768.85 6007.47 18459.31 00:39:44.641 [2024-11-25T12:14:24.544Z] =================================================================================================================== 00:39:44.641 [2024-11-25T12:14:24.544Z] Total : 16411.30 64.11 0.00 0.00 7768.85 6007.47 18459.31 00:39:44.641 { 00:39:44.641 "results": [ 00:39:44.641 { 00:39:44.641 "job": "nvme0n1", 00:39:44.641 "core_mask": "0x2", 00:39:44.641 "workload": "randrw", 00:39:44.641 "percentage": 50, 00:39:44.641 "status": "finished", 00:39:44.641 "queue_depth": 128, 00:39:44.641 "io_size": 4096, 00:39:44.641 "runtime": 1.007294, 00:39:44.641 "iops": 16411.296006925484, 00:39:44.641 "mibps": 64.10662502705267, 00:39:44.641 "io_failed": 0, 00:39:44.641 "io_timeout": 0, 00:39:44.641 "avg_latency_us": 7768.848255197307, 00:39:44.641 "min_latency_us": 6007.466666666666, 00:39:44.641 "max_latency_us": 18459.306666666667 00:39:44.641 } 00:39:44.641 ], 00:39:44.641 "core_count": 1 00:39:44.641 } 00:39:44.641 13:14:24 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:44.641 13:14:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:44.901 13:14:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:44.901 13:14:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:44.901 13:14:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:44.901 13:14:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:44.901 13:14:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:44.901 13:14:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.162 13:14:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:45.162 13:14:24 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:45.162 13:14:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:45.162 13:14:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:45.162 13:14:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:45.162 13:14:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:45.162 13:14:24 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.162 13:14:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:45.162 13:14:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:45.162 13:14:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:45.162 13:14:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:45.162 13:14:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:45.162 13:14:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:45.162 13:14:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:45.162 13:14:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:45.162 13:14:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:45.162 13:14:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:45.423 [2024-11-25 13:14:25.191403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:45.423 [2024-11-25 13:14:25.192247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d7a10 (107): Transport endpoint is not connected 00:39:45.423 [2024-11-25 13:14:25.193242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d7a10 (9): Bad file descriptor 00:39:45.423 [2024-11-25 13:14:25.194244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:45.423 [2024-11-25 13:14:25.194252] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:45.423 [2024-11-25 13:14:25.194257] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:45.423 [2024-11-25 13:14:25.194264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
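(Note on the errors above: this is the negative half of the test. The target listener was set up against key0, but the initiator attached with --psk key1, so the TLS handshake cannot complete; the socket is torn down (errno 107, Transport endpoint is not connected), the controller enters a failed state, and the attach RPC returns -5. A minimal sketch of the pattern being exercised is below; the socket path, NQNs, and rpc.py flags are copied verbatim from the run above, while the key-file path is a hypothetical stand-in:

  # register a PSK file under a name the target does NOT expect
  # (/tmp/psk1.txt is a hypothetical path, not one from this run)
  ./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/psk1.txt
  # attach with the mismatched key: expected to fail with -5, Input/output error
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

The JSON-RPC request and error response for the failed attach follow.)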
00:39:45.423 request: 00:39:45.423 { 00:39:45.423 "name": "nvme0", 00:39:45.423 "trtype": "tcp", 00:39:45.423 "traddr": "127.0.0.1", 00:39:45.423 "adrfam": "ipv4", 00:39:45.423 "trsvcid": "4420", 00:39:45.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:45.423 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:45.423 "prchk_reftag": false, 00:39:45.423 "prchk_guard": false, 00:39:45.423 "hdgst": false, 00:39:45.423 "ddgst": false, 00:39:45.423 "psk": "key1", 00:39:45.423 "allow_unrecognized_csi": false, 00:39:45.423 "method": "bdev_nvme_attach_controller", 00:39:45.423 "req_id": 1 00:39:45.423 } 00:39:45.423 Got JSON-RPC error response 00:39:45.423 response: 00:39:45.423 { 00:39:45.423 "code": -5, 00:39:45.423 "message": "Input/output error" 00:39:45.423 } 00:39:45.423 13:14:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:45.423 13:14:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:45.423 13:14:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:45.423 13:14:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:45.423 13:14:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:45.423 13:14:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:45.423 13:14:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:45.423 13:14:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:45.423 13:14:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:45.423 13:14:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.683 13:14:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:45.683 13:14:25 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:45.683 13:14:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:45.683 13:14:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:45.683 13:14:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:45.683 13:14:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:45.683 13:14:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:45.683 13:14:25 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:45.683 13:14:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:45.683 13:14:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:45.943 13:14:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:45.943 13:14:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:46.203 13:14:25 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:46.203 13:14:25 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:46.203 13:14:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:46.203 13:14:26 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:46.203 13:14:26 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.YdxOJLzi7d 00:39:46.203 13:14:26 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.YdxOJLzi7d 00:39:46.203 13:14:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:46.203 13:14:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.YdxOJLzi7d 00:39:46.203 13:14:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:46.203 13:14:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:46.203 13:14:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:46.203 13:14:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:46.203 13:14:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YdxOJLzi7d 00:39:46.203 13:14:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YdxOJLzi7d 00:39:46.464 [2024-11-25 13:14:26.215895] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YdxOJLzi7d': 0100660 00:39:46.464 [2024-11-25 13:14:26.215915] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:46.464 request: 00:39:46.464 { 00:39:46.464 "name": "key0", 00:39:46.464 "path": "/tmp/tmp.YdxOJLzi7d", 00:39:46.464 "method": "keyring_file_add_key", 00:39:46.464 "req_id": 1 00:39:46.464 } 00:39:46.464 Got JSON-RPC error response 00:39:46.464 response: 00:39:46.464 { 00:39:46.464 "code": -1, 00:39:46.464 "message": "Operation not permitted" 00:39:46.464 } 00:39:46.464 13:14:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:46.464 13:14:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:46.464 13:14:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:46.464 13:14:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:46.464 13:14:26 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.YdxOJLzi7d 00:39:46.464 13:14:26 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YdxOJLzi7d 00:39:46.464 13:14:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YdxOJLzi7d 00:39:46.726 13:14:26 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.YdxOJLzi7d 00:39:46.726 13:14:26 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:46.726 13:14:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:46.726 13:14:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:46.726 13:14:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:46.726 13:14:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:46.726 13:14:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:46.726 13:14:26 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:46.726 13:14:26 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:46.726 13:14:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:46.726 13:14:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:46.726 13:14:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:46.726 13:14:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:46.726 13:14:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:46.726 13:14:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:46.726 13:14:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:46.726 13:14:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:46.987 [2024-11-25 13:14:26.749248] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.YdxOJLzi7d': No such file or directory 00:39:46.987 [2024-11-25 13:14:26.749260] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:46.987 [2024-11-25 13:14:26.749273] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:46.987 [2024-11-25 13:14:26.749279] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:46.987 [2024-11-25 13:14:26.749288] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:46.987 [2024-11-25 13:14:26.749293] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:46.987 request: 00:39:46.987 { 00:39:46.987 "name": "nvme0", 00:39:46.987 "trtype": "tcp", 00:39:46.987 "traddr": "127.0.0.1", 00:39:46.987 "adrfam": "ipv4", 00:39:46.987 "trsvcid": "4420", 00:39:46.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:46.987 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:46.987 "prchk_reftag": false, 00:39:46.987 "prchk_guard": false, 00:39:46.987 "hdgst": false, 00:39:46.987 "ddgst": false, 00:39:46.987 "psk": "key0", 00:39:46.987 "allow_unrecognized_csi": false, 00:39:46.987 "method": "bdev_nvme_attach_controller", 00:39:46.987 "req_id": 1 00:39:46.987 } 00:39:46.987 Got JSON-RPC error response 00:39:46.987 response: 00:39:46.987 { 00:39:46.987 "code": -19, 00:39:46.987 "message": "No such device" 00:39:46.987 } 00:39:46.987 13:14:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:46.987 13:14:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:46.987 13:14:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:46.987 13:14:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:46.987 13:14:26 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:46.987 13:14:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:47.247 13:14:26 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HLFmAI4cLe 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:47.247 13:14:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:47.247 13:14:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:47.247 13:14:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:47.247 13:14:26 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:47.247 13:14:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:47.247 13:14:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HLFmAI4cLe 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HLFmAI4cLe 00:39:47.247 13:14:26 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.HLFmAI4cLe 00:39:47.247 13:14:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HLFmAI4cLe 00:39:47.247 13:14:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HLFmAI4cLe 00:39:47.507 13:14:27 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.507 13:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:47.507 nvme0n1 00:39:47.507 13:14:27 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:47.507 13:14:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:47.507 13:14:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:47.507 13:14:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:47.507 13:14:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:47.507 13:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:47.768 13:14:27 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:47.768 13:14:27 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:47.768 13:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:48.028 13:14:27 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:48.028 13:14:27 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:48.028 13:14:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.028 13:14:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.028 13:14:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.028 13:14:27 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:48.028 13:14:27 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:48.289 13:14:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:48.289 13:14:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:48.289 13:14:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:48.289 13:14:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:48.289 13:14:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.289 13:14:28 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:48.289 13:14:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:48.289 13:14:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:48.549 13:14:28 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:48.549 13:14:28 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:48.549 13:14:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:48.549 13:14:28 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:48.549 13:14:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HLFmAI4cLe 00:39:48.549 13:14:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HLFmAI4cLe 00:39:48.810 13:14:28 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FVkmayfdSb 00:39:48.810 13:14:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FVkmayfdSb 00:39:49.069 13:14:28 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.069 13:14:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.329 nvme0n1 00:39:49.329 13:14:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:49.329 13:14:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:49.588 13:14:29 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:49.588 "subsystems": [ 00:39:49.588 { 00:39:49.588 "subsystem": "keyring", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "keyring_file_add_key", 00:39:49.588 "params": { 00:39:49.588 "name": "key0", 00:39:49.588 "path": "/tmp/tmp.HLFmAI4cLe" 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "keyring_file_add_key", 00:39:49.588 "params": { 00:39:49.588 "name": "key1", 00:39:49.588 "path": "/tmp/tmp.FVkmayfdSb" 00:39:49.588 } 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 
}, 00:39:49.588 { 00:39:49.588 "subsystem": "iobuf", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "iobuf_set_options", 00:39:49.588 "params": { 00:39:49.588 "small_pool_count": 8192, 00:39:49.588 "large_pool_count": 1024, 00:39:49.588 "small_bufsize": 8192, 00:39:49.588 "large_bufsize": 135168, 00:39:49.588 "enable_numa": false 00:39:49.588 } 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "sock", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "sock_set_default_impl", 00:39:49.588 "params": { 00:39:49.588 "impl_name": "posix" 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "sock_impl_set_options", 00:39:49.588 "params": { 00:39:49.588 "impl_name": "ssl", 00:39:49.588 "recv_buf_size": 4096, 00:39:49.588 "send_buf_size": 4096, 00:39:49.588 "enable_recv_pipe": true, 00:39:49.588 "enable_quickack": false, 00:39:49.588 "enable_placement_id": 0, 00:39:49.588 "enable_zerocopy_send_server": true, 00:39:49.588 "enable_zerocopy_send_client": false, 00:39:49.588 "zerocopy_threshold": 0, 00:39:49.588 "tls_version": 0, 00:39:49.588 "enable_ktls": false 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "sock_impl_set_options", 00:39:49.588 "params": { 00:39:49.588 "impl_name": "posix", 00:39:49.588 "recv_buf_size": 2097152, 00:39:49.588 "send_buf_size": 2097152, 00:39:49.588 "enable_recv_pipe": true, 00:39:49.588 "enable_quickack": false, 00:39:49.588 "enable_placement_id": 0, 00:39:49.588 "enable_zerocopy_send_server": true, 00:39:49.588 "enable_zerocopy_send_client": false, 00:39:49.588 "zerocopy_threshold": 0, 00:39:49.588 "tls_version": 0, 00:39:49.588 "enable_ktls": false 00:39:49.588 } 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "vmd", 00:39:49.588 "config": [] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "accel", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "accel_set_options", 00:39:49.588 "params": { 00:39:49.588 "small_cache_size": 128, 00:39:49.588 "large_cache_size": 16, 00:39:49.588 "task_count": 2048, 00:39:49.588 "sequence_count": 2048, 00:39:49.588 "buf_count": 2048 00:39:49.588 } 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "bdev", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "bdev_set_options", 00:39:49.588 "params": { 00:39:49.588 "bdev_io_pool_size": 65535, 00:39:49.588 "bdev_io_cache_size": 256, 00:39:49.588 "bdev_auto_examine": true, 00:39:49.588 "iobuf_small_cache_size": 128, 00:39:49.588 "iobuf_large_cache_size": 16 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "bdev_raid_set_options", 00:39:49.588 "params": { 00:39:49.588 "process_window_size_kb": 1024, 00:39:49.588 "process_max_bandwidth_mb_sec": 0 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "bdev_iscsi_set_options", 00:39:49.588 "params": { 00:39:49.588 "timeout_sec": 30 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "bdev_nvme_set_options", 00:39:49.588 "params": { 00:39:49.588 "action_on_timeout": "none", 00:39:49.588 "timeout_us": 0, 00:39:49.588 "timeout_admin_us": 0, 00:39:49.588 "keep_alive_timeout_ms": 10000, 00:39:49.588 "arbitration_burst": 0, 00:39:49.588 "low_priority_weight": 0, 00:39:49.588 "medium_priority_weight": 0, 00:39:49.588 "high_priority_weight": 0, 00:39:49.588 "nvme_adminq_poll_period_us": 10000, 00:39:49.588 "nvme_ioq_poll_period_us": 0, 00:39:49.588 "io_queue_requests": 512, 00:39:49.588 
"delay_cmd_submit": true, 00:39:49.588 "transport_retry_count": 4, 00:39:49.588 "bdev_retry_count": 3, 00:39:49.588 "transport_ack_timeout": 0, 00:39:49.588 "ctrlr_loss_timeout_sec": 0, 00:39:49.588 "reconnect_delay_sec": 0, 00:39:49.588 "fast_io_fail_timeout_sec": 0, 00:39:49.588 "disable_auto_failback": false, 00:39:49.588 "generate_uuids": false, 00:39:49.588 "transport_tos": 0, 00:39:49.588 "nvme_error_stat": false, 00:39:49.588 "rdma_srq_size": 0, 00:39:49.588 "io_path_stat": false, 00:39:49.588 "allow_accel_sequence": false, 00:39:49.588 "rdma_max_cq_size": 0, 00:39:49.588 "rdma_cm_event_timeout_ms": 0, 00:39:49.588 "dhchap_digests": [ 00:39:49.588 "sha256", 00:39:49.588 "sha384", 00:39:49.588 "sha512" 00:39:49.588 ], 00:39:49.588 "dhchap_dhgroups": [ 00:39:49.588 "null", 00:39:49.588 "ffdhe2048", 00:39:49.588 "ffdhe3072", 00:39:49.588 "ffdhe4096", 00:39:49.588 "ffdhe6144", 00:39:49.588 "ffdhe8192" 00:39:49.588 ] 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "bdev_nvme_attach_controller", 00:39:49.588 "params": { 00:39:49.588 "name": "nvme0", 00:39:49.588 "trtype": "TCP", 00:39:49.588 "adrfam": "IPv4", 00:39:49.588 "traddr": "127.0.0.1", 00:39:49.588 "trsvcid": "4420", 00:39:49.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:49.588 "prchk_reftag": false, 00:39:49.588 "prchk_guard": false, 00:39:49.588 "ctrlr_loss_timeout_sec": 0, 00:39:49.588 "reconnect_delay_sec": 0, 00:39:49.588 "fast_io_fail_timeout_sec": 0, 00:39:49.588 "psk": "key0", 00:39:49.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:49.588 "hdgst": false, 00:39:49.588 "ddgst": false, 00:39:49.588 "multipath": "multipath" 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "bdev_nvme_set_hotplug", 00:39:49.588 "params": { 00:39:49.588 "period_us": 100000, 00:39:49.588 "enable": false 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "bdev_wait_for_examine" 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "nbd", 00:39:49.588 "config": [] 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }' 00:39:49.588 13:14:29 keyring_file -- keyring/file.sh@115 -- # killprocess 989338 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 989338 ']' 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@958 -- # kill -0 989338 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 989338 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 989338' 00:39:49.588 killing process with pid 989338 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@973 -- # kill 989338 00:39:49.588 Received shutdown signal, test time was about 1.000000 seconds 00:39:49.588 00:39:49.588 Latency(us) 00:39:49.588 [2024-11-25T12:14:29.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:49.588 [2024-11-25T12:14:29.491Z] =================================================================================================================== 00:39:49.588 [2024-11-25T12:14:29.491Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:49.588 13:14:29 
keyring_file -- common/autotest_common.sh@978 -- # wait 989338 00:39:49.588 13:14:29 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:49.588 13:14:29 keyring_file -- keyring/file.sh@118 -- # bperfpid=991144 00:39:49.588 13:14:29 keyring_file -- keyring/file.sh@120 -- # waitforlisten 991144 /var/tmp/bperf.sock 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 991144 ']' 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.588 13:14:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:49.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:49.588 13:14:29 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:49.588 "subsystems": [ 00:39:49.588 { 00:39:49.588 "subsystem": "keyring", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "keyring_file_add_key", 00:39:49.588 "params": { 00:39:49.588 "name": "key0", 00:39:49.588 "path": "/tmp/tmp.HLFmAI4cLe" 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "keyring_file_add_key", 00:39:49.588 "params": { 00:39:49.588 "name": "key1", 00:39:49.588 "path": "/tmp/tmp.FVkmayfdSb" 00:39:49.588 } 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "iobuf", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "iobuf_set_options", 00:39:49.588 "params": { 00:39:49.588 "small_pool_count": 8192, 00:39:49.588 "large_pool_count": 1024, 00:39:49.588 "small_bufsize": 8192, 00:39:49.588 "large_bufsize": 135168, 00:39:49.588 "enable_numa": false 00:39:49.588 } 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "sock", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "sock_set_default_impl", 00:39:49.588 "params": { 00:39:49.588 "impl_name": "posix" 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "sock_impl_set_options", 00:39:49.588 "params": { 00:39:49.588 "impl_name": "ssl", 00:39:49.588 "recv_buf_size": 4096, 00:39:49.588 "send_buf_size": 4096, 00:39:49.588 "enable_recv_pipe": true, 00:39:49.588 "enable_quickack": false, 00:39:49.588 "enable_placement_id": 0, 00:39:49.588 "enable_zerocopy_send_server": true, 00:39:49.588 "enable_zerocopy_send_client": false, 00:39:49.588 "zerocopy_threshold": 0, 00:39:49.588 "tls_version": 0, 00:39:49.588 "enable_ktls": false 00:39:49.588 } 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "method": "sock_impl_set_options", 00:39:49.588 "params": { 00:39:49.588 "impl_name": "posix", 00:39:49.588 "recv_buf_size": 2097152, 00:39:49.588 "send_buf_size": 2097152, 00:39:49.588 "enable_recv_pipe": true, 00:39:49.588 "enable_quickack": false, 00:39:49.588 "enable_placement_id": 0, 00:39:49.588 "enable_zerocopy_send_server": true, 00:39:49.588 "enable_zerocopy_send_client": false, 00:39:49.588 "zerocopy_threshold": 0, 00:39:49.588 "tls_version": 0, 00:39:49.588 "enable_ktls": false 00:39:49.588 } 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "vmd", 00:39:49.588 "config": [] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "accel", 00:39:49.588 "config": [ 00:39:49.588 { 
00:39:49.588 "method": "accel_set_options", 00:39:49.588 "params": { 00:39:49.588 "small_cache_size": 128, 00:39:49.588 "large_cache_size": 16, 00:39:49.588 "task_count": 2048, 00:39:49.588 "sequence_count": 2048, 00:39:49.588 "buf_count": 2048 00:39:49.588 } 00:39:49.588 } 00:39:49.588 ] 00:39:49.588 }, 00:39:49.588 { 00:39:49.588 "subsystem": "bdev", 00:39:49.588 "config": [ 00:39:49.588 { 00:39:49.588 "method": "bdev_set_options", 00:39:49.588 "params": { 00:39:49.588 "bdev_io_pool_size": 65535, 00:39:49.588 "bdev_io_cache_size": 256, 00:39:49.589 "bdev_auto_examine": true, 00:39:49.589 "iobuf_small_cache_size": 128, 00:39:49.589 "iobuf_large_cache_size": 16 00:39:49.589 } 00:39:49.589 }, 00:39:49.589 { 00:39:49.589 "method": "bdev_raid_set_options", 00:39:49.589 "params": { 00:39:49.589 "process_window_size_kb": 1024, 00:39:49.589 "process_max_bandwidth_mb_sec": 0 00:39:49.589 } 00:39:49.589 }, 00:39:49.589 { 00:39:49.589 "method": "bdev_iscsi_set_options", 00:39:49.589 "params": { 00:39:49.589 "timeout_sec": 30 00:39:49.589 } 00:39:49.589 }, 00:39:49.589 { 00:39:49.589 "method": "bdev_nvme_set_options", 00:39:49.589 "params": { 00:39:49.589 "action_on_timeout": "none", 00:39:49.589 "timeout_us": 0, 00:39:49.589 "timeout_admin_us": 0, 00:39:49.589 "keep_alive_timeout_ms": 10000, 00:39:49.589 "arbitration_burst": 0, 00:39:49.589 "low_priority_weight": 0, 00:39:49.589 "medium_priority_weight": 0, 00:39:49.589 "high_priority_weight": 0, 00:39:49.589 "nvme_adminq_poll_period_us": 10000, 00:39:49.589 "nvme_ioq_poll_period_us": 0, 00:39:49.589 "io_queue_requests": 512, 00:39:49.589 "delay_cmd_submit": true, 00:39:49.589 "transport_retry_count": 4, 00:39:49.589 "bdev_retry_count": 3, 00:39:49.589 "transport_ack_timeout": 0, 00:39:49.589 "ctrlr_loss_timeout_sec": 0, 00:39:49.589 "reconnect_delay_sec": 0, 00:39:49.589 "fast_io_fail_timeout_sec": 0, 00:39:49.589 "disable_auto_failback": false, 00:39:49.589 "generate_uuids": false, 00:39:49.589 "transport_tos": 0, 00:39:49.589 "nvme_error_stat": false, 00:39:49.589 "rdma_srq_size": 0, 00:39:49.589 "io_path_stat": false, 00:39:49.589 "allow_accel_sequence": false, 00:39:49.589 "rdma_max_cq_size": 0, 00:39:49.589 "rdma_cm_event_timeout_ms": 0, 00:39:49.589 "dhchap_digests": [ 00:39:49.589 "sha256", 00:39:49.589 "sha384", 00:39:49.589 "sha512" 00:39:49.589 ], 00:39:49.589 "dhchap_dhgroups": [ 00:39:49.589 "null", 00:39:49.589 "ffdhe2048", 00:39:49.589 "ffdhe3072", 00:39:49.589 "ffdhe4096", 00:39:49.589 "ffdhe6144", 00:39:49.589 "ffdhe8192" 00:39:49.589 ] 00:39:49.589 } 00:39:49.589 }, 00:39:49.589 { 00:39:49.589 "method": "bdev_nvme_attach_controller", 00:39:49.589 "params": { 00:39:49.589 "name": "nvme0", 00:39:49.589 "trtype": "TCP", 00:39:49.589 "adrfam": "IPv4", 00:39:49.589 "traddr": "127.0.0.1", 00:39:49.589 "trsvcid": "4420", 00:39:49.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:49.589 "prchk_reftag": false, 00:39:49.589 "prchk_guard": false, 00:39:49.589 "ctrlr_loss_timeout_sec": 0, 00:39:49.589 "reconnect_delay_sec": 0, 00:39:49.589 "fast_io_fail_timeout_sec": 0, 00:39:49.589 "psk": "key0", 00:39:49.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:49.589 "hdgst": false, 00:39:49.589 "ddgst": false, 00:39:49.589 "multipath": "multipath" 00:39:49.589 } 00:39:49.589 }, 00:39:49.589 { 00:39:49.589 "method": "bdev_nvme_set_hotplug", 00:39:49.589 "params": { 00:39:49.589 "period_us": 100000, 00:39:49.589 "enable": false 00:39:49.589 } 00:39:49.589 }, 00:39:49.589 { 00:39:49.589 "method": "bdev_wait_for_examine" 00:39:49.589 } 00:39:49.589 ] 
00:39:49.589 }, 00:39:49.589 { 00:39:49.589 "subsystem": "nbd", 00:39:49.589 "config": [] 00:39:49.589 } 00:39:49.589 ] 00:39:49.589 }' 00:39:49.589 13:14:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.589 13:14:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:49.589 [2024-11-25 13:14:29.466521] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 00:39:49.589 [2024-11-25 13:14:29.466570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991144 ] 00:39:49.849 [2024-11-25 13:14:29.520924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.849 [2024-11-25 13:14:29.549971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:49.849 [2024-11-25 13:14:29.693184] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:50.420 13:14:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:50.420 13:14:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:50.420 13:14:30 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:50.420 13:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.420 13:14:30 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:50.680 13:14:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:50.680 13:14:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:50.680 13:14:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:50.680 13:14:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:50.680 13:14:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.680 13:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.680 13:14:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:50.939 13:14:30 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:50.940 13:14:30 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:50.940 13:14:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:50.940 13:14:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:50.940 13:14:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.940 13:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.940 13:14:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:50.940 13:14:30 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:50.940 13:14:30 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:50.940 13:14:30 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:50.940 13:14:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:51.200 13:14:30 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:51.200 13:14:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:51.200 13:14:30 keyring_file -- keyring/file.sh@19 -- # 
rm -f /tmp/tmp.HLFmAI4cLe /tmp/tmp.FVkmayfdSb 00:39:51.200 13:14:30 keyring_file -- keyring/file.sh@20 -- # killprocess 991144 00:39:51.200 13:14:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 991144 ']' 00:39:51.200 13:14:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 991144 00:39:51.200 13:14:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:51.200 13:14:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.200 13:14:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991144 00:39:51.200 13:14:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:51.200 13:14:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:51.200 13:14:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991144' 00:39:51.200 killing process with pid 991144 00:39:51.200 13:14:31 keyring_file -- common/autotest_common.sh@973 -- # kill 991144 00:39:51.200 Received shutdown signal, test time was about 1.000000 seconds 00:39:51.200 00:39:51.200 Latency(us) 00:39:51.200 [2024-11-25T12:14:31.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:51.200 [2024-11-25T12:14:31.103Z] =================================================================================================================== 00:39:51.200 [2024-11-25T12:14:31.103Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:51.200 13:14:31 keyring_file -- common/autotest_common.sh@978 -- # wait 991144 00:39:51.460 13:14:31 keyring_file -- keyring/file.sh@21 -- # killprocess 989334 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 989334 ']' 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 989334 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 989334 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 989334' 00:39:51.460 killing process with pid 989334 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@973 -- # kill 989334 00:39:51.460 13:14:31 keyring_file -- common/autotest_common.sh@978 -- # wait 989334 00:39:51.721 00:39:51.721 real 0m11.287s 00:39:51.721 user 0m27.719s 00:39:51.721 sys 0m2.630s 00:39:51.721 13:14:31 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:51.721 13:14:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:51.721 ************************************ 00:39:51.721 END TEST keyring_file 00:39:51.721 ************************************ 00:39:51.721 13:14:31 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:51.721 13:14:31 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:51.721 13:14:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:51.721 13:14:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:51.721 13:14:31 -- common/autotest_common.sh@10 -- # set 
+x 00:39:51.721 ************************************ 00:39:51.721 START TEST keyring_linux 00:39:51.721 ************************************ 00:39:51.721 13:14:31 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:51.721 Joined session keyring: 582163434 00:39:51.721 * Looking for test storage... 00:39:51.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:51.721 13:14:31 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:51.721 13:14:31 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:51.721 13:14:31 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:51.983 13:14:31 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:51.983 13:14:31 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:51.983 13:14:31 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:51.983 13:14:31 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:51.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.983 --rc genhtml_branch_coverage=1 00:39:51.983 --rc genhtml_function_coverage=1 00:39:51.983 --rc genhtml_legend=1 00:39:51.983 --rc geninfo_all_blocks=1 00:39:51.983 --rc geninfo_unexecuted_blocks=1 00:39:51.983 00:39:51.983 ' 00:39:51.983 13:14:31 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:51.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.983 --rc genhtml_branch_coverage=1 00:39:51.983 --rc genhtml_function_coverage=1 00:39:51.983 --rc genhtml_legend=1 00:39:51.983 --rc geninfo_all_blocks=1 00:39:51.983 --rc geninfo_unexecuted_blocks=1 00:39:51.983 00:39:51.983 ' 00:39:51.983 13:14:31 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:51.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.983 --rc genhtml_branch_coverage=1 00:39:51.983 --rc genhtml_function_coverage=1 00:39:51.983 --rc genhtml_legend=1 00:39:51.983 --rc geninfo_all_blocks=1 00:39:51.983 --rc geninfo_unexecuted_blocks=1 00:39:51.983 00:39:51.983 ' 00:39:51.983 13:14:31 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:51.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.983 --rc genhtml_branch_coverage=1 00:39:51.983 --rc genhtml_function_coverage=1 00:39:51.983 --rc genhtml_legend=1 00:39:51.983 --rc geninfo_all_blocks=1 00:39:51.983 --rc geninfo_unexecuted_blocks=1 00:39:51.983 00:39:51.983 ' 00:39:51.983 13:14:31 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:51.983 13:14:31 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:51.983 13:14:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:51.983 13:14:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:51.983 13:14:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:51.983 13:14:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:51.983 13:14:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:51.983 13:14:31 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:51.984 13:14:31 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:51.984 13:14:31 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:51.984 13:14:31 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:51.984 13:14:31 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:51.984 13:14:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.984 13:14:31 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.984 13:14:31 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.984 13:14:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:51.984 13:14:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
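
The NVME_HOSTNQN exported above comes from 'nvme gen-hostnqn', which wraps a freshly generated UUID in the standard uuid-based NQN prefix. A minimal sketch of an equivalent one-liner, assuming uuidgen is available (a hypothetical stand-in for illustration, not the command nvmf/common.sh actually runs):

# emit a host NQN in the same uuid-based format as `nvme gen-hostnqn`
printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"

The matching NVME_HOSTID is just the UUID portion, which is why both values share the same 00539ede-... suffix in the log above.
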
00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:51.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:51.984 /tmp/:spdk-test:key0 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:51.984 
13:14:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:51.984 13:14:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:51.984 13:14:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:51.984 /tmp/:spdk-test:key1 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=991586 00:39:51.984 13:14:31 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 991586 00:39:51.984 13:14:31 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 991586 ']' 00:39:51.984 13:14:31 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.984 13:14:31 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:51.984 13:14:31 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:51.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:51.984 13:14:31 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:51.984 13:14:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:51.984 [2024-11-25 13:14:31.875229] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
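
Both /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 above were produced by format_interchange_psk, whose embedded 'python -' step converts the raw key string and digest id into the NVMe TLS PSK interchange format. A minimal sketch of that conversion, assuming the trailer is a little-endian zlib CRC32 over the key bytes (inferred from the MDAx...JEiQ: output in this run, not checked against nvmf/common.sh itself):

# sketch: key string + digest id -> "NVMeTLSkey-1:<digest>:<base64(key||crc32)>:"
key=00112233445566778899aabbccddeeff digest=0
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key material is used as-is, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed CRC32 trailer, little-endian
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF

The chmod 0600 that follows each key keeps the PSK file private before its path is handed over.
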
00:39:51.984 [2024-11-25 13:14:31.875301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991586 ] 00:39:52.244 [2024-11-25 13:14:31.957768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.244 [2024-11-25 13:14:31.999156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.815 13:14:32 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:52.815 13:14:32 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:52.815 13:14:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:52.815 13:14:32 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.815 13:14:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:52.815 [2024-11-25 13:14:32.685568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:52.815 null0 00:39:52.815 [2024-11-25 13:14:32.717619] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:53.077 [2024-11-25 13:14:32.718027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:53.077 13:14:32 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.077 13:14:32 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:53.077 830382863 00:39:53.077 13:14:32 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:53.077 374060334 00:39:53.077 13:14:32 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=991815 00:39:53.077 13:14:32 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 991815 /var/tmp/bperf.sock 00:39:53.077 13:14:32 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:53.077 13:14:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 991815 ']' 00:39:53.077 13:14:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:53.077 13:14:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:53.077 13:14:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:53.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:53.077 13:14:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:53.077 13:14:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:53.077 [2024-11-25 13:14:32.807801] Starting SPDK v25.01-pre git sha1 e4a86cc92 / DPDK 24.03.0 initialization... 
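
The two keyctl add calls above return the serial numbers (830382863 and 374060334) that the later check_keys/get_keysn steps resolve by name. A short sketch of that keyutils round trip, assuming a session keyring has already been joined (the keyctl-session-wrapper invoked at the top of this test does exactly that); psk_string here is a hypothetical placeholder for the NVMeTLSkey-1 payload:

# add a user-type key to the session keyring; keyctl prints the new serial number
sn=$(keyctl add user :spdk-test:key0 "$psk_string" @s)
keyctl search @s user :spdk-test:key0   # name -> serial, as get_keysn does below
keyctl print "$sn"                      # dump the payload, used for the key0 comparison
keyctl unlink "$sn"                     # cleanup form used at the end of this test
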
00:39:53.077 [2024-11-25 13:14:32.807852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991815 ] 00:39:53.077 [2024-11-25 13:14:32.894451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.077 [2024-11-25 13:14:32.924578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:54.017 13:14:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:54.017 13:14:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:54.017 13:14:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:54.017 13:14:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:54.017 13:14:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:54.017 13:14:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:54.276 13:14:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:54.276 13:14:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:54.276 [2024-11-25 13:14:34.129066] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:54.536 nvme0n1 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:54.536 13:14:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:54.536 13:14:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:54.536 13:14:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.536 13:14:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:54.536 13:14:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.796 13:14:34 keyring_linux -- keyring/linux.sh@25 -- # sn=830382863 00:39:54.796 13:14:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:54.796 13:14:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:54.796 13:14:34 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 830382863 == \8\3\0\3\8\2\8\6\3 ]] 00:39:54.796 13:14:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 830382863 00:39:54.796 13:14:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:54.796 13:14:34 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:54.796 Running I/O for 1 seconds... 00:39:56.174 16150.00 IOPS, 63.09 MiB/s 00:39:56.174 Latency(us) 00:39:56.174 [2024-11-25T12:14:36.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.174 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:56.174 nvme0n1 : 1.01 16151.98 63.09 0.00 0.00 7891.28 6990.51 15619.41 00:39:56.174 [2024-11-25T12:14:36.077Z] =================================================================================================================== 00:39:56.174 [2024-11-25T12:14:36.077Z] Total : 16151.98 63.09 0.00 0.00 7891.28 6990.51 15619.41 00:39:56.174 { 00:39:56.174 "results": [ 00:39:56.174 { 00:39:56.174 "job": "nvme0n1", 00:39:56.174 "core_mask": "0x2", 00:39:56.174 "workload": "randread", 00:39:56.174 "status": "finished", 00:39:56.174 "queue_depth": 128, 00:39:56.174 "io_size": 4096, 00:39:56.174 "runtime": 1.007802, 00:39:56.174 "iops": 16151.982234605606, 00:39:56.174 "mibps": 63.09368060392815, 00:39:56.174 "io_failed": 0, 00:39:56.174 "io_timeout": 0, 00:39:56.174 "avg_latency_us": 7891.276065036655, 00:39:56.174 "min_latency_us": 6990.506666666667, 00:39:56.174 "max_latency_us": 15619.413333333334 00:39:56.174 } 00:39:56.174 ], 00:39:56.174 "core_count": 1 00:39:56.174 } 00:39:56.175 13:14:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:56.175 13:14:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:56.175 13:14:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:56.175 13:14:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:56.175 13:14:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:56.175 13:14:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:56.175 13:14:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:56.175 13:14:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:56.435 13:14:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:56.435 [2024-11-25 13:14:36.248065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:56.435 [2024-11-25 13:14:36.248234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fc2e0 (107): Transport endpoint is not connected 00:39:56.435 [2024-11-25 13:14:36.249230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fc2e0 (9): Bad file descriptor 00:39:56.435 [2024-11-25 13:14:36.250232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:56.435 [2024-11-25 13:14:36.250243] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:56.435 [2024-11-25 13:14:36.250249] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:56.435 [2024-11-25 13:14:36.250255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
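
This second attach is a deliberate negative case: the target was set up against key0, so bdev_nvme_attach_controller with :spdk-test:key1 is expected to fail, and the NOT wrapper turns that failure into a pass. A simplified stand-in for that helper, assuming the essential behavior is plain exit-status inversion (the real autotest_common.sh version also distinguishes signal exits, which is what the 'es > 128' check below is doing):

# minimal NOT: succeed only when the wrapped command fails
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # failure was the expected outcome
}
NOT /bin/false && echo 'expected failure observed'
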
00:39:56.435 request: 00:39:56.435 { 00:39:56.435 "name": "nvme0", 00:39:56.435 "trtype": "tcp", 00:39:56.435 "traddr": "127.0.0.1", 00:39:56.435 "adrfam": "ipv4", 00:39:56.435 "trsvcid": "4420", 00:39:56.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:56.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:56.435 "prchk_reftag": false, 00:39:56.435 "prchk_guard": false, 00:39:56.435 "hdgst": false, 00:39:56.435 "ddgst": false, 00:39:56.435 "psk": ":spdk-test:key1", 00:39:56.435 "allow_unrecognized_csi": false, 00:39:56.435 "method": "bdev_nvme_attach_controller", 00:39:56.435 "req_id": 1 00:39:56.435 } 00:39:56.435 Got JSON-RPC error response 00:39:56.435 response: 00:39:56.435 { 00:39:56.435 "code": -5, 00:39:56.435 "message": "Input/output error" 00:39:56.435 } 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@33 -- # sn=830382863 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 830382863 00:39:56.435 1 links removed 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@33 -- # sn=374060334 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 374060334 00:39:56.435 1 links removed 00:39:56.435 13:14:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 991815 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 991815 ']' 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 991815 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:56.435 13:14:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991815 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991815' 00:39:56.695 killing process with pid 991815 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 991815 00:39:56.695 Received shutdown signal, test time was about 1.000000 seconds 00:39:56.695 00:39:56.695 
Latency(us) 00:39:56.695 [2024-11-25T12:14:36.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.695 [2024-11-25T12:14:36.598Z] =================================================================================================================== 00:39:56.695 [2024-11-25T12:14:36.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 991815 00:39:56.695 13:14:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 991586 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 991586 ']' 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 991586 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 991586 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 991586' 00:39:56.695 killing process with pid 991586 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 991586 00:39:56.695 13:14:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 991586 00:39:56.955 00:39:56.955 real 0m5.220s 00:39:56.955 user 0m9.736s 00:39:56.955 sys 0m1.371s 00:39:56.955 13:14:36 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:56.955 13:14:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:56.955 ************************************ 00:39:56.955 END TEST keyring_linux 00:39:56.955 ************************************ 00:39:56.955 13:14:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:56.955 13:14:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:56.955 13:14:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:56.955 13:14:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:56.955 13:14:36 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:56.955 13:14:36 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:56.955 13:14:36 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:56.955 13:14:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.955 13:14:36 -- common/autotest_common.sh@10 -- # set +x 00:39:56.955 13:14:36 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:56.955 13:14:36 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:56.955 13:14:36 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:56.955 13:14:36 -- common/autotest_common.sh@10 -- # set +x 00:40:05.094 INFO: APP EXITING 00:40:05.094 INFO: 
killing all VMs 00:40:05.094 INFO: killing vhost app 00:40:05.094 INFO: EXIT DONE 00:40:07.641 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:07.641 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:07.641 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:40:11.851 Cleaning 00:40:11.851 Removing: /var/run/dpdk/spdk0/config 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:11.851 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:11.851 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:11.851 Removing: /var/run/dpdk/spdk1/config 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:11.851 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:11.851 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:11.851 Removing: /var/run/dpdk/spdk2/config 00:40:11.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:11.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:11.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:11.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:11.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:11.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:11.851 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:12.112 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:12.113 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:12.113 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:12.113 Removing: /var/run/dpdk/spdk3/config 00:40:12.113 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:12.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:12.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:12.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:12.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:12.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:12.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:12.113 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:12.113 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:12.113 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:12.113 Removing: /var/run/dpdk/spdk4/config 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:12.113 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:12.113 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:12.113 Removing: /dev/shm/bdev_svc_trace.1 00:40:12.113 Removing: /dev/shm/nvmf_trace.0 00:40:12.113 Removing: /dev/shm/spdk_tgt_trace.pid375710 00:40:12.113 Removing: /var/run/dpdk/spdk0 00:40:12.113 Removing: /var/run/dpdk/spdk1 00:40:12.113 Removing: /var/run/dpdk/spdk2 00:40:12.113 Removing: /var/run/dpdk/spdk3 00:40:12.113 Removing: /var/run/dpdk/spdk4 00:40:12.113 Removing: /var/run/dpdk/spdk_pid373860 00:40:12.113 Removing: /var/run/dpdk/spdk_pid375710 00:40:12.113 Removing: /var/run/dpdk/spdk_pid376248 00:40:12.113 Removing: /var/run/dpdk/spdk_pid377867 00:40:12.113 Removing: /var/run/dpdk/spdk_pid378079 00:40:12.113 Removing: /var/run/dpdk/spdk_pid379326 00:40:12.113 Removing: /var/run/dpdk/spdk_pid379481 00:40:12.113 Removing: /var/run/dpdk/spdk_pid379936 00:40:12.113 Removing: /var/run/dpdk/spdk_pid380974 00:40:12.113 Removing: /var/run/dpdk/spdk_pid381548 00:40:12.113 Removing: /var/run/dpdk/spdk_pid381940 00:40:12.113 Removing: /var/run/dpdk/spdk_pid382340 00:40:12.113 Removing: /var/run/dpdk/spdk_pid382752 00:40:12.113 Removing: /var/run/dpdk/spdk_pid383064 00:40:12.113 Removing: /var/run/dpdk/spdk_pid383203 00:40:12.113 Removing: /var/run/dpdk/spdk_pid383538 00:40:12.113 Removing: /var/run/dpdk/spdk_pid383920 00:40:12.113 Removing: /var/run/dpdk/spdk_pid384981 00:40:12.113 Removing: /var/run/dpdk/spdk_pid388267 00:40:12.113 Removing: /var/run/dpdk/spdk_pid388639 00:40:12.113 Removing: /var/run/dpdk/spdk_pid388988 00:40:12.113 Removing: /var/run/dpdk/spdk_pid389321 00:40:12.113 Removing: /var/run/dpdk/spdk_pid389703 00:40:12.113 Removing: /var/run/dpdk/spdk_pid389915 00:40:12.113 Removing: /var/run/dpdk/spdk_pid390401 00:40:12.113 Removing: /var/run/dpdk/spdk_pid390417 00:40:12.375 Removing: /var/run/dpdk/spdk_pid390786 00:40:12.375 Removing: /var/run/dpdk/spdk_pid391064 00:40:12.375 Removing: /var/run/dpdk/spdk_pid391156 00:40:12.375 Removing: /var/run/dpdk/spdk_pid391490 00:40:12.375 Removing: /var/run/dpdk/spdk_pid391940 00:40:12.375 Removing: /var/run/dpdk/spdk_pid392294 00:40:12.376 Removing: /var/run/dpdk/spdk_pid392648 00:40:12.376 Removing: /var/run/dpdk/spdk_pid397830 00:40:12.376 Removing: /var/run/dpdk/spdk_pid403641 00:40:12.376 
Removing: /var/run/dpdk/spdk_pid416054 00:40:12.376 Removing: /var/run/dpdk/spdk_pid416803 00:40:12.376 Removing: /var/run/dpdk/spdk_pid422522 00:40:12.376 Removing: /var/run/dpdk/spdk_pid422966 00:40:12.376 Removing: /var/run/dpdk/spdk_pid429194 00:40:12.376 Removing: /var/run/dpdk/spdk_pid436887 00:40:12.376 Removing: /var/run/dpdk/spdk_pid440014 00:40:12.376 Removing: /var/run/dpdk/spdk_pid453607 00:40:12.376 Removing: /var/run/dpdk/spdk_pid465686 00:40:12.376 Removing: /var/run/dpdk/spdk_pid467701 00:40:12.376 Removing: /var/run/dpdk/spdk_pid468717 00:40:12.376 Removing: /var/run/dpdk/spdk_pid491947 00:40:12.376 Removing: /var/run/dpdk/spdk_pid497139 00:40:12.376 Removing: /var/run/dpdk/spdk_pid558452 00:40:12.376 Removing: /var/run/dpdk/spdk_pid565278 00:40:12.376 Removing: /var/run/dpdk/spdk_pid573034 00:40:12.376 Removing: /var/run/dpdk/spdk_pid581330 00:40:12.376 Removing: /var/run/dpdk/spdk_pid581335 00:40:12.376 Removing: /var/run/dpdk/spdk_pid582351 00:40:12.376 Removing: /var/run/dpdk/spdk_pid583358 00:40:12.376 Removing: /var/run/dpdk/spdk_pid584385 00:40:12.376 Removing: /var/run/dpdk/spdk_pid585258 00:40:12.376 Removing: /var/run/dpdk/spdk_pid585345 00:40:12.376 Removing: /var/run/dpdk/spdk_pid585669 00:40:12.376 Removing: /var/run/dpdk/spdk_pid585691 00:40:12.376 Removing: /var/run/dpdk/spdk_pid585702 00:40:12.376 Removing: /var/run/dpdk/spdk_pid586702 00:40:12.376 Removing: /var/run/dpdk/spdk_pid587706 00:40:12.376 Removing: /var/run/dpdk/spdk_pid588748 00:40:12.376 Removing: /var/run/dpdk/spdk_pid589502 00:40:12.376 Removing: /var/run/dpdk/spdk_pid589644 00:40:12.376 Removing: /var/run/dpdk/spdk_pid589893 00:40:12.376 Removing: /var/run/dpdk/spdk_pid591800 00:40:12.376 Removing: /var/run/dpdk/spdk_pid593119 00:40:12.376 Removing: /var/run/dpdk/spdk_pid603811 00:40:12.376 Removing: /var/run/dpdk/spdk_pid640611 00:40:12.376 Removing: /var/run/dpdk/spdk_pid646406 00:40:12.376 Removing: /var/run/dpdk/spdk_pid648385 00:40:12.376 Removing: /var/run/dpdk/spdk_pid650525 00:40:12.376 Removing: /var/run/dpdk/spdk_pid650744 00:40:12.376 Removing: /var/run/dpdk/spdk_pid650771 00:40:12.376 Removing: /var/run/dpdk/spdk_pid651088 00:40:12.376 Removing: /var/run/dpdk/spdk_pid651641 00:40:12.376 Removing: /var/run/dpdk/spdk_pid653820 00:40:12.376 Removing: /var/run/dpdk/spdk_pid654905 00:40:12.376 Removing: /var/run/dpdk/spdk_pid655286 00:40:12.376 Removing: /var/run/dpdk/spdk_pid657992 00:40:12.376 Removing: /var/run/dpdk/spdk_pid658704 00:40:12.376 Removing: /var/run/dpdk/spdk_pid659413 00:40:12.376 Removing: /var/run/dpdk/spdk_pid664909 00:40:12.376 Removing: /var/run/dpdk/spdk_pid672236 00:40:12.376 Removing: /var/run/dpdk/spdk_pid672237 00:40:12.639 Removing: /var/run/dpdk/spdk_pid672238 00:40:12.639 Removing: /var/run/dpdk/spdk_pid677524 00:40:12.639 Removing: /var/run/dpdk/spdk_pid689343 00:40:12.639 Removing: /var/run/dpdk/spdk_pid694137 00:40:12.639 Removing: /var/run/dpdk/spdk_pid701771 00:40:12.639 Removing: /var/run/dpdk/spdk_pid703261 00:40:12.639 Removing: /var/run/dpdk/spdk_pid704860 00:40:12.639 Removing: /var/run/dpdk/spdk_pid706710 00:40:12.639 Removing: /var/run/dpdk/spdk_pid712778 00:40:12.639 Removing: /var/run/dpdk/spdk_pid718663 00:40:12.639 Removing: /var/run/dpdk/spdk_pid724306 00:40:12.639 Removing: /var/run/dpdk/spdk_pid734439 00:40:12.639 Removing: /var/run/dpdk/spdk_pid734441 00:40:12.639 Removing: /var/run/dpdk/spdk_pid740725 00:40:12.639 Removing: /var/run/dpdk/spdk_pid740909 00:40:12.639 Removing: /var/run/dpdk/spdk_pid741074 00:40:12.639 Removing: 
/var/run/dpdk/spdk_pid741732 00:40:12.639 Removing: /var/run/dpdk/spdk_pid741737 00:40:12.639 Removing: /var/run/dpdk/spdk_pid747799 00:40:12.639 Removing: /var/run/dpdk/spdk_pid748377 00:40:12.639 Removing: /var/run/dpdk/spdk_pid754274 00:40:12.639 Removing: /var/run/dpdk/spdk_pid757524 00:40:12.639 Removing: /var/run/dpdk/spdk_pid764600 00:40:12.639 Removing: /var/run/dpdk/spdk_pid771843 00:40:12.639 Removing: /var/run/dpdk/spdk_pid782444 00:40:12.639 Removing: /var/run/dpdk/spdk_pid792232 00:40:12.639 Removing: /var/run/dpdk/spdk_pid792299 00:40:12.639 Removing: /var/run/dpdk/spdk_pid817490 00:40:12.639 Removing: /var/run/dpdk/spdk_pid818176 00:40:12.639 Removing: /var/run/dpdk/spdk_pid818858 00:40:12.639 Removing: /var/run/dpdk/spdk_pid819544 00:40:12.639 Removing: /var/run/dpdk/spdk_pid820601 00:40:12.639 Removing: /var/run/dpdk/spdk_pid821285 00:40:12.639 Removing: /var/run/dpdk/spdk_pid821967 00:40:12.639 Removing: /var/run/dpdk/spdk_pid822673 00:40:12.639 Removing: /var/run/dpdk/spdk_pid828378 00:40:12.639 Removing: /var/run/dpdk/spdk_pid828713 00:40:12.639 Removing: /var/run/dpdk/spdk_pid836427 00:40:12.639 Removing: /var/run/dpdk/spdk_pid836717 00:40:12.639 Removing: /var/run/dpdk/spdk_pid843735 00:40:12.639 Removing: /var/run/dpdk/spdk_pid849879 00:40:12.639 Removing: /var/run/dpdk/spdk_pid862005 00:40:12.639 Removing: /var/run/dpdk/spdk_pid862788 00:40:12.639 Removing: /var/run/dpdk/spdk_pid868370 00:40:12.639 Removing: /var/run/dpdk/spdk_pid868772 00:40:12.639 Removing: /var/run/dpdk/spdk_pid874419 00:40:12.639 Removing: /var/run/dpdk/spdk_pid881824 00:40:12.639 Removing: /var/run/dpdk/spdk_pid884707 00:40:12.639 Removing: /var/run/dpdk/spdk_pid898536 00:40:12.639 Removing: /var/run/dpdk/spdk_pid910027 00:40:12.639 Removing: /var/run/dpdk/spdk_pid912033 00:40:12.639 Removing: /var/run/dpdk/spdk_pid913046 00:40:12.639 Removing: /var/run/dpdk/spdk_pid934022 00:40:12.639 Removing: /var/run/dpdk/spdk_pid939334 00:40:12.639 Removing: /var/run/dpdk/spdk_pid942602 00:40:12.639 Removing: /var/run/dpdk/spdk_pid950551 00:40:12.639 Removing: /var/run/dpdk/spdk_pid950641 00:40:12.639 Removing: /var/run/dpdk/spdk_pid957811 00:40:12.639 Removing: /var/run/dpdk/spdk_pid960163 00:40:12.639 Removing: /var/run/dpdk/spdk_pid962415 00:40:12.639 Removing: /var/run/dpdk/spdk_pid963871 00:40:12.639 Removing: /var/run/dpdk/spdk_pid966315 00:40:12.901 Removing: /var/run/dpdk/spdk_pid967599 00:40:12.901 Removing: /var/run/dpdk/spdk_pid978761 00:40:12.901 Removing: /var/run/dpdk/spdk_pid979234 00:40:12.901 Removing: /var/run/dpdk/spdk_pid979815 00:40:12.901 Removing: /var/run/dpdk/spdk_pid982892 00:40:12.901 Removing: /var/run/dpdk/spdk_pid983538 00:40:12.901 Removing: /var/run/dpdk/spdk_pid984206 00:40:12.901 Removing: /var/run/dpdk/spdk_pid989334 00:40:12.901 Removing: /var/run/dpdk/spdk_pid989338 00:40:12.901 Removing: /var/run/dpdk/spdk_pid991144 00:40:12.902 Removing: /var/run/dpdk/spdk_pid991586 00:40:12.902 Removing: /var/run/dpdk/spdk_pid991815 00:40:12.902 Clean 00:40:12.902 13:14:52 -- common/autotest_common.sh@1453 -- # return 0 00:40:12.902 13:14:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:40:12.902 13:14:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:12.902 13:14:52 -- common/autotest_common.sh@10 -- # set +x 00:40:12.902 13:14:52 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:40:12.902 13:14:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:12.902 13:14:52 -- common/autotest_common.sh@10 -- # set +x 00:40:12.902 13:14:52 -- 
spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:12.902 13:14:52 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:40:12.902 13:14:52 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:40:12.902 13:14:52 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:40:12.902 13:14:52 -- spdk/autotest.sh@398 -- # hostname 00:40:12.902 13:14:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:40:13.163 geninfo: WARNING: invalid characters removed from testname! 00:40:39.747 13:15:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:41.703 13:15:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:43.150 13:15:23 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:45.062 13:15:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:46.446 13:15:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:48.379 13:15:27 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:49.763 13:15:29 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:49.763 13:15:29 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:49.763 13:15:29 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:49.763 13:15:29 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:49.763 13:15:29 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:49.763 13:15:29 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:50.023 + [[ -n 288688 ]] 00:40:50.024 + sudo kill 288688 00:40:50.035 [Pipeline] } 00:40:50.054 [Pipeline] // stage 00:40:50.060 [Pipeline] } 00:40:50.077 [Pipeline] // timeout 00:40:50.083 [Pipeline] } 00:40:50.097 [Pipeline] // catchError 00:40:50.104 [Pipeline] } 00:40:50.119 [Pipeline] // wrap 00:40:50.125 [Pipeline] } 00:40:50.138 [Pipeline] // catchError 00:40:50.149 [Pipeline] stage 00:40:50.151 [Pipeline] { (Epilogue) 00:40:50.166 [Pipeline] catchError 00:40:50.168 [Pipeline] { 00:40:50.182 [Pipeline] echo 00:40:50.184 Cleanup processes 00:40:50.190 [Pipeline] sh 00:40:50.481 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:50.481 1005840 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:50.496 [Pipeline] sh 00:40:50.784 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:50.784 ++ grep -v 'sudo pgrep' 00:40:50.784 ++ awk '{print $1}' 00:40:50.784 + sudo kill -9 00:40:50.784 + true 00:40:50.800 [Pipeline] sh 00:40:51.094 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:03.369 [Pipeline] sh 00:41:03.657 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:03.657 Artifacts sizes are good 00:41:03.674 [Pipeline] archiveArtifacts 00:41:03.683 Archiving artifacts 00:41:03.847 [Pipeline] sh 00:41:04.142 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:04.162 [Pipeline] cleanWs 00:41:04.175 [WS-CLEANUP] Deleting project workspace... 00:41:04.175 [WS-CLEANUP] Deferred wipeout is used... 00:41:04.182 [WS-CLEANUP] done 00:41:04.184 [Pipeline] } 00:41:04.206 [Pipeline] // catchError 00:41:04.218 [Pipeline] sh 00:41:04.510 + logger -p user.info -t JENKINS-CI 00:41:04.520 [Pipeline] } 00:41:04.534 [Pipeline] // stage 00:41:04.540 [Pipeline] } 00:41:04.555 [Pipeline] // node 00:41:04.561 [Pipeline] End of Pipeline 00:41:04.601 Finished: SUCCESS